Test Report: Hyperkit_macOS 17837

198b71ba1a4b2ed7cfcde452ea5de14c4e4e06ae:2023-12-19:32352

Failed tests (7/314)

Order  Failed test                                         Duration (s)
27     TestOffline                                         23.2
163    TestImageBuild/serial/Setup                         16.15
238    TestRunningBinaryUpgrade                            126.97
266    TestNoKubernetes/serial/StartWithK8s                16.3
267    TestNoKubernetes/serial/StartWithStopK8s            6.3
305    TestNetworkPlugins/group/false/Start                15.74
308    TestNetworkPlugins/group/enable-default-cni/Start   15.24
TestOffline (23.2s)
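
TestOffline (aab_offline_test.go) starts a cluster behind an unreachable proxy (HTTP_PROXY=172.16.1.1:1, visible in the stdout below) to simulate running without network access; this run exited with status 90 after roughly 17.6 seconds. A minimal local-reproduction sketch, with the env value, binary path, profile name, and flags copied verbatim from the failing command:

    # Hedged repro sketch: assumes the minikube cache is already populated
    # (the log below shows "Found local preload ... skipping download").
    HTTP_PROXY=172.16.1.1:1 out/minikube-darwin-amd64 start -p offline-docker-499000 \
      --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit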

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-499000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-499000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 90 (17.609667232s)

-- stdout --
	* [offline-docker-499000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node offline-docker-499000 in cluster offline-docker-499000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	
	

-- /stdout --
** stderr ** 
	I1219 11:36:06.106722   23633 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:36:06.107066   23633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:36:06.107072   23633 out.go:309] Setting ErrFile to fd 2...
	I1219 11:36:06.107076   23633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:36:06.107287   23633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:36:06.109691   23633 out.go:303] Setting JSON to false
	I1219 11:36:06.146100   23633 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7536,"bootTime":1703007030,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:36:06.146221   23633 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:36:06.169758   23633 out.go:177] * [offline-docker-499000] minikube v1.32.0 on Darwin 14.2
	I1219 11:36:06.256825   23633 out.go:177]   - MINIKUBE_LOCATION=17837
	I1219 11:36:06.229030   23633 notify.go:220] Checking for updates...
	I1219 11:36:06.318006   23633 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:36:06.368652   23633 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:36:06.424152   23633 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:36:06.489690   23633 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:36:06.558139   23633 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 11:36:06.583387   23633 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:36:06.642878   23633 out.go:177] * Using the hyperkit driver based on user configuration
	I1219 11:36:06.693163   23633 start.go:298] selected driver: hyperkit
	I1219 11:36:06.693201   23633 start.go:902] validating driver "hyperkit" against <nil>
	I1219 11:36:06.693229   23633 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 11:36:06.698337   23633 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:36:06.698964   23633 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:36:06.707240   23633 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:36:06.715746   23633 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:36:06.715782   23633 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1219 11:36:06.715831   23633 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1219 11:36:06.716177   23633 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 11:36:06.716252   23633 cni.go:84] Creating CNI manager for ""
	I1219 11:36:06.716269   23633 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1219 11:36:06.716280   23633 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 11:36:06.716291   23633 start_flags.go:323] config:
	{Name:offline-docker-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:36:06.716495   23633 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:36:06.738048   23633 out.go:177] * Starting control plane node offline-docker-499000 in cluster offline-docker-499000
	I1219 11:36:06.759911   23633 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1219 11:36:06.759993   23633 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1219 11:36:06.760016   23633 cache.go:56] Caching tarball of preloaded images
	I1219 11:36:06.760376   23633 preload.go:174] Found /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 11:36:06.760388   23633 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1219 11:36:06.760743   23633 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/offline-docker-499000/config.json ...
	I1219 11:36:06.760778   23633 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/offline-docker-499000/config.json: {Name:mkacdf642b8a1cf13a2aaf60d3d71c660c5c7aa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 11:36:06.761147   23633 start.go:365] acquiring machines lock for offline-docker-499000: {Name:mkc3d80bc77e215fa21f0c59378bcbfaf828d0a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 11:36:06.761473   23633 start.go:369] acquired machines lock for "offline-docker-499000" in 312.332µs
	I1219 11:36:06.761500   23633 start.go:93] Provisioning new machine with config: &{Name:offline-docker-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1219 11:36:06.761548   23633 start.go:125] createHost starting for "" (driver="hyperkit")
	I1219 11:36:06.785048   23633 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1219 11:36:06.785323   23633 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:36:06.785365   23633 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:36:06.793892   23633 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58247
	I1219 11:36:06.794294   23633 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:36:06.794785   23633 main.go:141] libmachine: Using API Version  1
	I1219 11:36:06.794798   23633 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:36:06.795031   23633 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:36:06.795161   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetMachineName
	I1219 11:36:06.795274   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:06.795442   23633 start.go:159] libmachine.API.Create for "offline-docker-499000" (driver="hyperkit")
	I1219 11:36:06.795476   23633 client.go:168] LocalClient.Create starting
	I1219 11:36:06.795509   23633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem
	I1219 11:36:06.795560   23633 main.go:141] libmachine: Decoding PEM data...
	I1219 11:36:06.795588   23633 main.go:141] libmachine: Parsing certificate...
	I1219 11:36:06.795675   23633 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem
	I1219 11:36:06.795719   23633 main.go:141] libmachine: Decoding PEM data...
	I1219 11:36:06.795734   23633 main.go:141] libmachine: Parsing certificate...
	I1219 11:36:06.795757   23633 main.go:141] libmachine: Running pre-create checks...
	I1219 11:36:06.795763   23633 main.go:141] libmachine: (offline-docker-499000) Calling .PreCreateCheck
	I1219 11:36:06.795872   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:06.796076   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetConfigRaw
	I1219 11:36:06.807809   23633 main.go:141] libmachine: Creating machine...
	I1219 11:36:06.807821   23633 main.go:141] libmachine: (offline-docker-499000) Calling .Create
	I1219 11:36:06.807941   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:06.808095   23633 main.go:141] libmachine: (offline-docker-499000) DBG | I1219 11:36:06.807933   23655 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:36:06.808187   23633 main.go:141] libmachine: (offline-docker-499000) Downloading /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17837-20429/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1219 11:36:07.365068   23633 main.go:141] libmachine: (offline-docker-499000) DBG | I1219 11:36:07.364967   23655 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/id_rsa...
	I1219 11:36:07.551716   23633 main.go:141] libmachine: (offline-docker-499000) DBG | I1219 11:36:07.551618   23655 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/offline-docker-499000.rawdisk...
	I1219 11:36:07.551737   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Writing magic tar header
	I1219 11:36:07.551756   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Writing SSH key tar header
	I1219 11:36:07.551972   23633 main.go:141] libmachine: (offline-docker-499000) DBG | I1219 11:36:07.551937   23655 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000 ...
	I1219 11:36:08.048271   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:08.048290   23633 main.go:141] libmachine: (offline-docker-499000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/hyperkit.pid
	I1219 11:36:08.050768   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Using UUID da82a1b8-9ea5-11ee-922f-149d997f80ea
	I1219 11:36:08.206209   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Generated MAC a2:7a:97:db:57:59
	I1219 11:36:08.206235   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-499000
	I1219 11:36:08.206295   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"da82a1b8-9ea5-11ee-922f-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000096390)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1219 11:36:08.206365   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"da82a1b8-9ea5-11ee-922f-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000096390)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1219 11:36:08.206474   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "da82a1b8-9ea5-11ee-922f-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/offline-docker-499000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/tty,log=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/bzimage,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-499000"}
	I1219 11:36:08.206552   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U da82a1b8-9ea5-11ee-922f-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/offline-docker-499000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/tty,log=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/console-ring -f kexec,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/bzimage,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-499000"
	I1219 11:36:08.206587   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1219 11:36:08.210234   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 DEBUG: hyperkit: Pid is 23678
	I1219 11:36:08.210962   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Attempt 0
	I1219 11:36:08.210992   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:08.211154   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:08.212418   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Searching for a2:7a:97:db:57:59 in /var/db/dhcpd_leases ...
	I1219 11:36:08.212719   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1219 11:36:08.212743   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:36:08.212776   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:36:08.212795   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:36:08.212823   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:36:08.212843   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:36:08.212861   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:36:08.212874   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:36:08.212882   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:36:08.212895   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:36:08.212909   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:36:08.212921   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:36:08.212965   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:36:08.212986   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:36:08.213003   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:36:08.213021   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:36:08.213035   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:36:08.213049   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:36:08.213072   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:36:08.213089   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:36:08.219576   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1219 11:36:08.247761   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1219 11:36:08.248740   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1219 11:36:08.248810   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1219 11:36:08.248828   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1219 11:36:08.248878   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1219 11:36:08.650531   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1219 11:36:08.650551   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1219 11:36:08.754932   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1219 11:36:08.754958   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1219 11:36:08.754974   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1219 11:36:08.754987   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1219 11:36:08.756050   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1219 11:36:08.756087   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1219 11:36:10.214364   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Attempt 1
	I1219 11:36:10.214383   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:10.214453   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:10.215423   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Searching for a2:7a:97:db:57:59 in /var/db/dhcpd_leases ...
	I1219 11:36:10.215497   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1219 11:36:10.215511   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:36:10.215520   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:36:10.215528   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:36:10.215540   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:36:10.215549   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:36:10.215573   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:36:10.215585   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:36:10.215604   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:36:10.215619   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:36:10.215630   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:36:10.215639   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:36:10.215654   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:36:10.215666   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:36:10.215678   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:36:10.215687   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:36:10.215697   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:36:10.215707   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:36:10.215731   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:36:10.215743   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:36:12.216992   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Attempt 2
	I1219 11:36:12.217022   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:12.217140   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:12.218239   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Searching for a2:7a:97:db:57:59 in /var/db/dhcpd_leases ...
	I1219 11:36:12.218313   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1219 11:36:12.218333   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:36:12.218363   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:36:12.218379   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:36:12.218410   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:36:12.218431   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:36:12.218464   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:36:12.218481   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:36:12.218499   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:36:12.218520   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:36:12.218532   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:36:12.218561   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:36:12.218572   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:36:12.218581   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:36:12.218603   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:36:12.218617   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:36:12.218628   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:36:12.218673   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:36:12.218688   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:36:12.218700   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:36:14.221060   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Attempt 3
	I1219 11:36:14.221082   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:14.221110   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:14.222165   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Searching for a2:7a:97:db:57:59 in /var/db/dhcpd_leases ...
	I1219 11:36:14.222222   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1219 11:36:14.222233   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:36:14.222266   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:36:14.222282   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:36:14.222292   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:36:14.222302   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:36:14.222311   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:36:14.222318   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:36:14.222327   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:36:14.222340   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:36:14.222352   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:36:14.222360   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:36:14.222385   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:36:14.222394   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:36:14.222401   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:36:14.222427   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:36:14.222442   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:36:14.222452   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:36:14.222462   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:36:14.222480   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:36:14.443593   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1219 11:36:14.443700   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1219 11:36:14.443709   23633 main.go:141] libmachine: (offline-docker-499000) DBG | 2023/12/19 11:36:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1219 11:36:16.223346   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Attempt 4
	I1219 11:36:16.223374   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:16.223456   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:16.224904   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Searching for a2:7a:97:db:57:59 in /var/db/dhcpd_leases ...
	I1219 11:36:16.224983   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I1219 11:36:16.224998   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:36:16.225027   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:36:16.225044   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:36:16.225058   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:36:16.225070   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:36:16.225084   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:36:16.225094   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:36:16.225108   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:36:16.225121   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:36:16.225135   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:36:16.225146   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:36:16.225160   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:36:16.225170   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:36:16.225179   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:36:16.225188   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:36:16.225196   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:36:16.225202   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:36:16.225217   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:36:16.225228   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:36:18.225980   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Attempt 5
	I1219 11:36:18.226000   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:18.226099   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:18.227129   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Searching for a2:7a:97:db:57:59 in /var/db/dhcpd_leases ...
	I1219 11:36:18.227230   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found 20 entries in /var/db/dhcpd_leases!
	I1219 11:36:18.227243   23633 main.go:141] libmachine: (offline-docker-499000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:36:18.227262   23633 main.go:141] libmachine: (offline-docker-499000) DBG | Found match: a2:7a:97:db:57:59
	I1219 11:36:18.227273   23633 main.go:141] libmachine: (offline-docker-499000) DBG | IP: 192.168.172.5
	I1219 11:36:18.227278   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetConfigRaw
	I1219 11:36:18.255186   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:18.255437   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:18.255615   23633 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1219 11:36:18.255631   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetState
	I1219 11:36:18.255780   23633 main.go:141] libmachine: (offline-docker-499000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:36:18.255880   23633 main.go:141] libmachine: (offline-docker-499000) DBG | hyperkit pid from json: 23678
	I1219 11:36:18.257102   23633 main.go:141] libmachine: Detecting operating system of created instance...
	I1219 11:36:18.257117   23633 main.go:141] libmachine: Waiting for SSH to be available...
	I1219 11:36:18.257123   23633 main.go:141] libmachine: Getting to WaitForSSH function...
	I1219 11:36:18.257131   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:18.257282   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:18.257440   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:18.257597   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:18.257711   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:18.257940   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:18.258311   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:18.258321   23633 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1219 11:36:19.316013   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 11:36:19.316027   23633 main.go:141] libmachine: Detecting the provisioner...
	I1219 11:36:19.316034   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.316205   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.316343   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.316477   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.316577   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.316729   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:19.317059   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:19.317069   23633 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1219 11:36:19.377655   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1219 11:36:19.377756   23633 main.go:141] libmachine: found compatible host: buildroot
	I1219 11:36:19.377766   23633 main.go:141] libmachine: Provisioning with buildroot...
	I1219 11:36:19.377773   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetMachineName
	I1219 11:36:19.377950   23633 buildroot.go:166] provisioning hostname "offline-docker-499000"
	I1219 11:36:19.377962   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetMachineName
	I1219 11:36:19.378059   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.378172   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.378313   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.378405   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.378492   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.378643   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:19.378922   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:19.378933   23633 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-499000 && echo "offline-docker-499000" | sudo tee /etc/hostname
	I1219 11:36:19.447409   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-499000
	
	I1219 11:36:19.447432   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.447585   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.447688   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.447819   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.447926   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.448041   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:19.448288   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:19.448303   23633 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-499000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-499000/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-499000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 11:36:19.515051   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 11:36:19.515073   23633 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17837-20429/.minikube CaCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17837-20429/.minikube}
	I1219 11:36:19.515088   23633 buildroot.go:174] setting up certificates
	I1219 11:36:19.515100   23633 provision.go:83] configureAuth start
	I1219 11:36:19.515108   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetMachineName
	I1219 11:36:19.515307   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetIP
	I1219 11:36:19.515425   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.515526   23633 provision.go:138] copyHostCerts
	I1219 11:36:19.515625   23633 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem, removing ...
	I1219 11:36:19.515636   23633 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem
	I1219 11:36:19.515864   23633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem (1082 bytes)
	I1219 11:36:19.518865   23633 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem, removing ...
	I1219 11:36:19.518883   23633 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem
	I1219 11:36:19.518964   23633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem (1123 bytes)
	I1219 11:36:19.519184   23633 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem, removing ...
	I1219 11:36:19.519190   23633 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem
	I1219 11:36:19.519281   23633 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem (1679 bytes)
	I1219 11:36:19.521755   23633 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem org=jenkins.offline-docker-499000 san=[192.168.172.5 192.168.172.5 localhost 127.0.0.1 minikube offline-docker-499000]
	I1219 11:36:19.594130   23633 provision.go:172] copyRemoteCerts
	I1219 11:36:19.594196   23633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 11:36:19.594217   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.594383   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.594481   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.594578   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.594675   23633 sshutil.go:53] new ssh client: &{IP:192.168.172.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/id_rsa Username:docker}
	I1219 11:36:19.631319   23633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 11:36:19.651637   23633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1219 11:36:19.671808   23633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 11:36:19.689864   23633 provision.go:86] duration metric: configureAuth took 174.688565ms
	I1219 11:36:19.689879   23633 buildroot.go:189] setting minikube options for container-runtime
	I1219 11:36:19.690021   23633 config.go:182] Loaded profile config "offline-docker-499000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:36:19.690036   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:19.690177   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.690284   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.690403   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.690510   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.690633   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.690747   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:19.691000   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:19.691010   23633 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1219 11:36:19.749893   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1219 11:36:19.749907   23633 buildroot.go:70] root file system type: tmpfs
	I1219 11:36:19.750013   23633 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1219 11:36:19.750028   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.750186   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.750300   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.750406   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.750502   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.750622   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:19.750891   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:19.750940   23633 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1219 11:36:19.819673   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1219 11:36:19.819716   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:19.819865   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:19.819948   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.820039   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:19.820192   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:19.820319   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:19.820604   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:19.820617   23633 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1219 11:36:20.465955   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1219 11:36:20.465973   23633 main.go:141] libmachine: Checking connection to Docker...
	I1219 11:36:20.465986   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetURL
	I1219 11:36:20.466167   23633 main.go:141] libmachine: Docker is up and running!
	I1219 11:36:20.466175   23633 main.go:141] libmachine: Reticulating splines...
	I1219 11:36:20.466181   23633 client.go:171] LocalClient.Create took 13.664176871s
	I1219 11:36:20.466205   23633 start.go:167] duration metric: libmachine.API.Create for "offline-docker-499000" took 13.664241397s
	I1219 11:36:20.466230   23633 start.go:300] post-start starting for "offline-docker-499000" (driver="hyperkit")
	I1219 11:36:20.466243   23633 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 11:36:20.466255   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:20.466452   23633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 11:36:20.466467   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:20.466553   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:20.466638   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:20.466741   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:20.466887   23633 sshutil.go:53] new ssh client: &{IP:192.168.172.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/id_rsa Username:docker}
	I1219 11:36:20.506199   23633 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 11:36:20.508994   23633 info.go:137] Remote host: Buildroot 2021.02.12
	I1219 11:36:20.509011   23633 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/addons for local assets ...
	I1219 11:36:20.509116   23633 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/files for local assets ...
	I1219 11:36:20.511720   23633 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem -> 208672.pem in /etc/ssl/certs
	I1219 11:36:20.511987   23633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 11:36:20.519023   23633 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem --> /etc/ssl/certs/208672.pem (1708 bytes)
	I1219 11:36:20.537096   23633 start.go:303] post-start completed in 70.828048ms
	I1219 11:36:20.537130   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetConfigRaw
	I1219 11:36:20.537795   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetIP
	I1219 11:36:20.537963   23633 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/offline-docker-499000/config.json ...
	I1219 11:36:20.538354   23633 start.go:128] duration metric: createHost completed in 13.770252568s
	I1219 11:36:20.538376   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:20.538493   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:20.538602   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:20.538702   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:20.538800   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:20.538917   23633 main.go:141] libmachine: Using SSH client type: native
	I1219 11:36:20.539182   23633 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.5 22 <nil> <nil>}
	I1219 11:36:20.539191   23633 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1219 11:36:20.599390   23633 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703014580.722572095
	
	I1219 11:36:20.599406   23633 fix.go:206] guest clock: 1703014580.722572095
	I1219 11:36:20.599412   23633 fix.go:219] Guest: 2023-12-19 11:36:20.722572095 -0800 PST Remote: 2023-12-19 11:36:20.538367 -0800 PST m=+14.484266776 (delta=184.205095ms)
	I1219 11:36:20.599433   23633 fix.go:190] guest clock delta is within tolerance: 184.205095ms
	I1219 11:36:20.599437   23633 start.go:83] releasing machines lock for "offline-docker-499000", held for 13.831387972s
	I1219 11:36:20.599457   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:20.599598   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetIP
	I1219 11:36:20.634246   23633 out.go:177] * Found network options:
	I1219 11:36:20.684027   23633 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W1219 11:36:20.707186   23633 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.172.5).
	I1219 11:36:20.728035   23633 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1219 11:36:20.774147   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:20.774793   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:20.774982   23633 main.go:141] libmachine: (offline-docker-499000) Calling .DriverName
	I1219 11:36:20.775138   23633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 11:36:20.775185   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:20.775256   23633 ssh_runner.go:195] Run: cat /version.json
	I1219 11:36:20.775317   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHHostname
	I1219 11:36:20.775356   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:20.775530   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:20.775543   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHPort
	I1219 11:36:20.775727   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:20.775757   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHKeyPath
	I1219 11:36:20.775879   23633 sshutil.go:53] new ssh client: &{IP:192.168.172.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/id_rsa Username:docker}
	I1219 11:36:20.775928   23633 main.go:141] libmachine: (offline-docker-499000) Calling .GetSSHUsername
	I1219 11:36:20.776130   23633 sshutil.go:53] new ssh client: &{IP:192.168.172.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/offline-docker-499000/id_rsa Username:docker}
	I1219 11:36:20.808516   23633 ssh_runner.go:195] Run: systemctl --version
	I1219 11:36:20.813398   23633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 11:36:20.860765   23633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 11:36:20.860817   23633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 11:36:20.872733   23633 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 11:36:20.872750   23633 start.go:475] detecting cgroup driver to use...
	I1219 11:36:20.872871   23633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:36:20.887062   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1219 11:36:20.896497   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 11:36:20.905314   23633 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 11:36:20.905377   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 11:36:20.913657   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:36:20.921829   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 11:36:20.929732   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:36:20.937367   23633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 11:36:20.945087   23633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 11:36:20.952903   23633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 11:36:20.960969   23633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 11:36:20.968677   23633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:36:21.063030   23633 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 11:36:21.081260   23633 start.go:475] detecting cgroup driver to use...
	I1219 11:36:21.081346   23633 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1219 11:36:21.094497   23633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:36:21.106216   23633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 11:36:21.119614   23633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:36:21.149885   23633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:36:21.159443   23633 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 11:36:21.202990   23633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:36:21.213823   23633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:36:21.231231   23633 ssh_runner.go:195] Run: which cri-dockerd
	I1219 11:36:21.234485   23633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1219 11:36:21.241983   23633 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1219 11:36:21.254708   23633 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1219 11:36:21.354614   23633 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1219 11:36:21.463548   23633 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1219 11:36:21.463648   23633 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1219 11:36:21.477458   23633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:36:21.587417   23633 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1219 11:36:23.005340   23633 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.41747409s)
	I1219 11:36:23.005400   23633 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1219 11:36:23.120871   23633 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1219 11:36:23.240837   23633 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1219 11:36:23.357331   23633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:36:23.474334   23633 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1219 11:36:23.491030   23633 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1219 11:36:23.531181   23633 out.go:177] 
	W1219 11:36:23.552294   23633 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-19 19:36:16 UTC, ends at Tue 2023-12-19 19:36:23 UTC. --
	Dec 19 19:36:17 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:36:17 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:36:20 offline-docker-499000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:36:20 offline-docker-499000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:36:20 offline-docker-499000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:36:20 offline-docker-499000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:36:20 offline-docker-499000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:36:23 offline-docker-499000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:36:23 offline-docker-499000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:36:23 offline-docker-499000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:36:23 offline-docker-499000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 19 19:36:23 offline-docker-499000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1219 11:36:23.552313   23633 out.go:239] * 
	W1219 11:36:23.553299   23633 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 11:36:23.624459   23633 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-499000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 90
panic.go:523: *** TestOffline FAILED at 2023-12-19 11:36:23.663939 -0800 PST m=+2051.423470696
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-499000 -n offline-docker-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-499000 -n offline-docker-499000: exit status 6 (198.701047ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1219 11:36:23.846428   23769 status.go:415] kubeconfig endpoint: extract IP: "offline-docker-499000" does not appear in /Users/jenkins/minikube-integration/17837-20429/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "offline-docker-499000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
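
Editor's note: the status error above is a side effect of the aborted start: the "offline-docker-499000" entry was never written to the kubeconfig, so endpoint extraction fails and status exits 6. The warning's own suggestion can be tried by hand; a sketch, assuming the profile still exists:

	# Sketch only: inspect contexts, then repoint kubectl as the warning suggests.
	kubectl config get-contexts
	out/minikube-darwin-amd64 update-context -p offline-docker-499000
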
helpers_test.go:175: Cleaning up "offline-docker-499000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-499000
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current901608591/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current901608591/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current901608591/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-499000: (5.379306689s)
--- FAIL: TestOffline (23.20s)
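
Editor's note: the journal excerpt above pins down the failure: systemd refuses to re-listen on cri-docker.socket while cri-docker.service is still active ("Socket service cri-docker.service already active, refusing."), so `sudo systemctl restart cri-docker.socket` exits non-zero. A minimal remediation sketch, assuming SSH access to the VM and that restarting the owning service unit (rather than the socket) is acceptable:

	# Sketch only: restart whichever unit currently owns the socket.
	if sudo systemctl is-active --quiet cri-docker.service; then
		sudo systemctl restart cri-docker.service   # restarting the service re-acquires its socket
	else
		sudo systemctl restart cri-docker.socket    # safe when the service is not running
	fi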

TestImageBuild/serial/Setup (16.15s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-849000 --driver=hyperkit 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p image-849000 --driver=hyperkit : exit status 90 (15.996335322s)

-- stdout --
	* [image-849000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node image-849000 in cluster image-849000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-19 19:13:49 UTC, ends at Tue 2023-12-19 19:13:55 UTC. --
	Dec 19 19:13:50 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:13:50 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:13:52 image-849000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:13:52 image-849000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:13:52 image-849000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:13:52 image-849000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:13:52 image-849000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:13:55 image-849000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:13:55 image-849000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:13:55 image-849000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:13:55 image-849000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 19 19:13:55 image-849000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-amd64 start -p image-849000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p image-849000 -n image-849000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p image-849000 -n image-849000: exit status 6 (148.319144ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1219 11:13:55.708543   22230 status.go:415] kubeconfig endpoint: extract IP: "image-849000" does not appear in /Users/jenkins/minikube-integration/17837-20429/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-849000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (16.15s)
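
Editor's note: same RUNTIME_ENABLE signature as TestOffline above: the socket restart is refused because cri-docker.service is already active. A diagnostic sketch (assuming SSH access to the image-849000 VM) that interleaves both units in a single journal query:

	# Sketch only: view socket and service events together to expose the activation race.
	sudo journalctl --no-pager -u cri-docker.socket -u cri-docker.service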

TestRunningBinaryUpgrade (126.97s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.2907113335.exe start -p running-upgrade-403000 --memory=2200 --vm-driver=hyperkit 
E1219 11:41:46.985459   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:42:10.730413   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:42:10.884465   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.2907113335.exe start -p running-upgrade-403000 --memory=2200 --vm-driver=hyperkit : (1m45.013688241s)
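
Editor's note: the E1219 cert_rotation lines interleaved above reference client certificates for profiles deleted earlier in the run (addons-233000, functional-795000, skaffold-596000); they appear to come from a background certificate-reload watcher in the test process and are unrelated to this test's result. A sketch to confirm which profile directories actually remain on the host:

	# Sketch only: list the surviving profile directories.
	ls /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/
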
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-403000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1219 11:43:32.806992   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-403000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (19.495727638s)

-- stdout --
	* [running-upgrade-403000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the hyperkit driver based on existing profile
	* Starting control plane node running-upgrade-403000 in cluster running-upgrade-403000
	* Updating the running hyperkit "running-upgrade-403000" VM ...
	
	

-- /stdout --
** stderr ** 
	I1219 11:43:16.465490   24302 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:43:16.465838   24302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:43:16.465844   24302 out.go:309] Setting ErrFile to fd 2...
	I1219 11:43:16.465848   24302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:43:16.466033   24302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:43:16.467820   24302 out.go:303] Setting JSON to false
	I1219 11:43:16.491339   24302 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":7966,"bootTime":1703007030,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:43:16.491429   24302 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:43:16.512987   24302 out.go:177] * [running-upgrade-403000] minikube v1.32.0 on Darwin 14.2
	I1219 11:43:16.608872   24302 out.go:177]   - MINIKUBE_LOCATION=17837
	I1219 11:43:16.571916   24302 notify.go:220] Checking for updates...
	I1219 11:43:16.650858   24302 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:43:16.697553   24302 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:43:16.718660   24302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:43:16.760444   24302 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:43:16.781456   24302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 11:43:16.802879   24302 config.go:182] Loaded profile config "running-upgrade-403000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1219 11:43:16.802905   24302 start_flags.go:694] config upgrade: Driver=hyperkit
	I1219 11:43:16.802912   24302 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0
	I1219 11:43:16.802986   24302 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/running-upgrade-403000/config.json ...
	I1219 11:43:16.803785   24302 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:43:16.803833   24302 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:43:16.812393   24302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58919
	I1219 11:43:16.812756   24302 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:43:16.813197   24302 main.go:141] libmachine: Using API Version  1
	I1219 11:43:16.813208   24302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:43:16.813438   24302 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:43:16.813543   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:16.834336   24302 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1219 11:43:16.855433   24302 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:43:16.855710   24302 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:43:16.855743   24302 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:43:16.863798   24302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58921
	I1219 11:43:16.864153   24302 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:43:16.864518   24302 main.go:141] libmachine: Using API Version  1
	I1219 11:43:16.864536   24302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:43:16.864797   24302 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:43:16.864924   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:16.914282   24302 out.go:177] * Using the hyperkit driver based on existing profile
	I1219 11:43:16.935544   24302 start.go:298] selected driver: hyperkit
	I1219 11:43:16.935559   24302 start.go:902] validating driver "hyperkit" against &{Name:running-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.172.13 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 11:43:16.935654   24302 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 11:43:16.938815   24302 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:16.938904   24302 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:43:16.947028   24302 install.go:137] /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:43:16.951401   24302 install.go:79] stdout: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:43:16.951425   24302 install.go:81] /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit looks good
	I1219 11:43:16.951569   24302 cni.go:84] Creating CNI manager for ""
	I1219 11:43:16.951587   24302 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1219 11:43:16.951597   24302 start_flags.go:323] config:
	{Name:running-upgrade-403000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.172.13 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1219 11:43:16.951784   24302 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:16.993382   24302 out.go:177] * Starting control plane node running-upgrade-403000 in cluster running-upgrade-403000
	I1219 11:43:17.014561   24302 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1219 11:43:17.107028   24302 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1219 11:43:17.107109   24302 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/running-upgrade-403000/config.json ...
	I1219 11:43:17.107228   24302 cache.go:107] acquiring lock: {Name:mka7cd9d1685bc8f72bce21340d35963c90be440 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107227   24302 cache.go:107] acquiring lock: {Name:mk047f59ff6cb49df4c4fb7aeadf73de34ff50ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107248   24302 cache.go:107] acquiring lock: {Name:mk1473c2fcaca16537bef871006e5af9807a15b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107314   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1219 11:43:17.107319   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1219 11:43:17.107336   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1219 11:43:17.107335   24302 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 114.279µs
	I1219 11:43:17.107335   24302 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.51µs
	I1219 11:43:17.107364   24302 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1219 11:43:17.107366   24302 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1219 11:43:17.107362   24302 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 137µs
	I1219 11:43:17.107376   24302 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1219 11:43:17.107340   24302 cache.go:107] acquiring lock: {Name:mkebe1c319b833f4bf8dc4eaad61da59ef8d46f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107414   24302 cache.go:107] acquiring lock: {Name:mk7da2374008ad0b6a853f867d309dbe22d8698f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107423   24302 cache.go:107] acquiring lock: {Name:mk0eb5f1a68dcb02a2057b62cc7e577795e71e7a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107463   24302 cache.go:107] acquiring lock: {Name:mk12955d23d6c20626de70916fe12b77952f8c12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107487   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1219 11:43:17.107503   24302 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 223.478µs
	I1219 11:43:17.107485   24302 cache.go:107] acquiring lock: {Name:mk22ffb4ed6124f66488e91de7a2eda4e5e04b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:43:17.107519   24302 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1219 11:43:17.107563   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1219 11:43:17.107577   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1219 11:43:17.107581   24302 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 243.417µs
	I1219 11:43:17.107590   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1219 11:43:17.107596   24302 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1219 11:43:17.107595   24302 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 216.452µs
	I1219 11:43:17.107602   24302 cache.go:115] /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1219 11:43:17.107603   24302 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1219 11:43:17.107600   24302 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 160.528µs
	I1219 11:43:17.107612   24302 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1219 11:43:17.107611   24302 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 184.188µs
	I1219 11:43:17.107618   24302 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1219 11:43:17.107625   24302 cache.go:87] Successfully saved all images to host disk.
	I1219 11:43:17.107776   24302 start.go:365] acquiring machines lock for running-upgrade-403000: {Name:mkc3d80bc77e215fa21f0c59378bcbfaf828d0a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 11:43:21.317337   24302 start.go:369] acquired machines lock for "running-upgrade-403000" in 4.209496424s
	I1219 11:43:21.317393   24302 start.go:96] Skipping create...Using existing machine configuration
	I1219 11:43:21.317406   24302 fix.go:54] fixHost starting: minikube
	I1219 11:43:21.317674   24302 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:43:21.317700   24302 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:43:21.326095   24302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58939
	I1219 11:43:21.326435   24302 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:43:21.326784   24302 main.go:141] libmachine: Using API Version  1
	I1219 11:43:21.326804   24302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:43:21.327038   24302 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:43:21.327151   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:21.327248   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetState
	I1219 11:43:21.327336   24302 main.go:141] libmachine: (running-upgrade-403000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:43:21.327419   24302 main.go:141] libmachine: (running-upgrade-403000) DBG | hyperkit pid from json: 24213
	I1219 11:43:21.328457   24302 fix.go:102] recreateIfNeeded on running-upgrade-403000: state=Running err=<nil>
	W1219 11:43:21.328485   24302 fix.go:128] unexpected machine state, will restart: <nil>
	I1219 11:43:21.351783   24302 out.go:177] * Updating the running hyperkit "running-upgrade-403000" VM ...
	I1219 11:43:21.388328   24302 machine.go:88] provisioning docker machine ...
	I1219 11:43:21.388355   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:21.388580   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetMachineName
	I1219 11:43:21.388769   24302 buildroot.go:166] provisioning hostname "running-upgrade-403000"
	I1219 11:43:21.388785   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetMachineName
	I1219 11:43:21.388920   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.389065   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:21.389229   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.389353   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.389527   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:21.389682   24302 main.go:141] libmachine: Using SSH client type: native
	I1219 11:43:21.390133   24302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.13 22 <nil> <nil>}
	I1219 11:43:21.390166   24302 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-403000 && echo "running-upgrade-403000" | sudo tee /etc/hostname
	I1219 11:43:21.454067   24302 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-403000
	
	I1219 11:43:21.454086   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.454227   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:21.454338   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.454444   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.454548   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:21.454698   24302 main.go:141] libmachine: Using SSH client type: native
	I1219 11:43:21.454948   24302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.13 22 <nil> <nil>}
	I1219 11:43:21.454968   24302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-403000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-403000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-403000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 11:43:21.513739   24302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 11:43:21.513759   24302 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17837-20429/.minikube CaCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17837-20429/.minikube}
	I1219 11:43:21.513777   24302 buildroot.go:174] setting up certificates
	I1219 11:43:21.513788   24302 provision.go:83] configureAuth start
	I1219 11:43:21.513795   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetMachineName
	I1219 11:43:21.513935   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetIP
	I1219 11:43:21.514033   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.514153   24302 provision.go:138] copyHostCerts
	I1219 11:43:21.514227   24302 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem, removing ...
	I1219 11:43:21.514237   24302 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem
	I1219 11:43:21.514375   24302 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem (1082 bytes)
	I1219 11:43:21.514594   24302 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem, removing ...
	I1219 11:43:21.514601   24302 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem
	I1219 11:43:21.514670   24302 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem (1123 bytes)
	I1219 11:43:21.514836   24302 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem, removing ...
	I1219 11:43:21.514842   24302 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem
	I1219 11:43:21.514902   24302 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem (1679 bytes)
	I1219 11:43:21.515039   24302 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-403000 san=[192.168.172.13 192.168.172.13 localhost 127.0.0.1 minikube running-upgrade-403000]
	I1219 11:43:21.581415   24302 provision.go:172] copyRemoteCerts
	I1219 11:43:21.581467   24302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 11:43:21.581486   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.581626   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:21.581723   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.581817   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:21.581915   24302 sshutil.go:53] new ssh client: &{IP:192.168.172.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/running-upgrade-403000/id_rsa Username:docker}
	I1219 11:43:21.614769   24302 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1219 11:43:21.624775   24302 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 11:43:21.634640   24302 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 11:43:21.644242   24302 provision.go:86] duration metric: configureAuth took 130.438145ms
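	configureAuth above regenerated the server certificate with the SANs listed in the provision line (the VM IP, localhost, 127.0.0.1, minikube, and the machine name) and copied it into /etc/docker on the guest. To inspect the SANs on the generated certificate by hand (a standard openssl inspection command, not part of the test run):
	
	  openssl x509 -noout -text \
	    -in /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'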
	I1219 11:43:21.644255   24302 buildroot.go:189] setting minikube options for container-runtime
	I1219 11:43:21.644390   24302 config.go:182] Loaded profile config "running-upgrade-403000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1219 11:43:21.644408   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:21.644555   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.644656   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:21.644750   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.644846   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.644932   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:21.645049   24302 main.go:141] libmachine: Using SSH client type: native
	I1219 11:43:21.645308   24302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.13 22 <nil> <nil>}
	I1219 11:43:21.645317   24302 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1219 11:43:21.706686   24302 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1219 11:43:21.706699   24302 buildroot.go:70] root file system type: tmpfs
	I1219 11:43:21.706781   24302 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1219 11:43:21.706796   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.706929   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:21.707032   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.707125   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.707238   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:21.707375   24302 main.go:141] libmachine: Using SSH client type: native
	I1219 11:43:21.707620   24302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.13 22 <nil> <nil>}
	I1219 11:43:21.707666   24302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1219 11:43:21.777930   24302 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1219 11:43:21.777955   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:21.778099   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:21.778394   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.778520   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:21.778681   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:21.778845   24302 main.go:141] libmachine: Using SSH client type: native
	I1219 11:43:21.779095   24302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.13 22 <nil> <nil>}
	I1219 11:43:21.779108   24302 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1219 11:43:33.893456   24302 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1219 11:43:33.893471   24302 machine.go:91] provisioned docker machine in 12.505015041s
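	Per the diff in the SSH output above, the only unit changes were the added StartLimitBurst/StartLimitIntervalSec and Restart=on-failure settings, plus restoring $MAINPID to ExecReload. The template writes it as \$MAINPID so the remote shell passes the literal through into the unit file; an unescaped reference would expand to an empty string, which is presumably how the previous unit ended up with the bare "ExecReload=/bin/kill -s HUP" shown in the diff. A minimal illustration of the escaping difference (hypothetical, run outside the test):
	
	  printf '%s\n' "ExecReload=/bin/kill -s HUP $MAINPID"   # shell expands; empty unless MAINPID is set
	  printf '%s\n' "ExecReload=/bin/kill -s HUP \$MAINPID"  # literal $MAINPID survives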
	I1219 11:43:33.893490   24302 start.go:300] post-start starting for "running-upgrade-403000" (driver="hyperkit")
	I1219 11:43:33.893502   24302 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 11:43:33.893514   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:33.893691   24302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 11:43:33.893708   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:33.893799   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:33.893904   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:33.893994   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:33.894069   24302 sshutil.go:53] new ssh client: &{IP:192.168.172.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/running-upgrade-403000/id_rsa Username:docker}
	I1219 11:43:33.926425   24302 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 11:43:33.928955   24302 info.go:137] Remote host: Buildroot 2019.02.7
	I1219 11:43:33.928968   24302 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/addons for local assets ...
	I1219 11:43:33.929064   24302 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/files for local assets ...
	I1219 11:43:33.929246   24302 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem -> 208672.pem in /etc/ssl/certs
	I1219 11:43:33.929452   24302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 11:43:33.933054   24302 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem --> /etc/ssl/certs/208672.pem (1708 bytes)
	I1219 11:43:33.941981   24302 start.go:303] post-start completed in 48.482782ms
	I1219 11:43:33.941994   24302 fix.go:56] fixHost completed within 12.624481272s
	I1219 11:43:33.942009   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:33.942138   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:33.942218   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:33.942298   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:33.942379   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:33.942486   24302 main.go:141] libmachine: Using SSH client type: native
	I1219 11:43:33.942731   24302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.13 22 <nil> <nil>}
	I1219 11:43:33.942738   24302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1219 11:43:34.001688   24302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703015014.152671639
	
	I1219 11:43:34.001703   24302 fix.go:206] guest clock: 1703015014.152671639
	I1219 11:43:34.001715   24302 fix.go:219] Guest: 2023-12-19 11:43:34.152671639 -0800 PST Remote: 2023-12-19 11:43:33.941998 -0800 PST m=+17.523388358 (delta=210.673639ms)
	I1219 11:43:34.001735   24302 fix.go:190] guest clock delta is within tolerance: 210.673639ms
	I1219 11:43:34.001738   24302 start.go:83] releasing machines lock for "running-upgrade-403000", held for 12.684274484s
	I1219 11:43:34.001758   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:34.001888   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetIP
	I1219 11:43:34.001991   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:34.002283   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:34.002387   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .DriverName
	I1219 11:43:34.002463   24302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 11:43:34.002495   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:34.002511   24302 ssh_runner.go:195] Run: cat /version.json
	I1219 11:43:34.002526   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHHostname
	I1219 11:43:34.002581   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:34.002614   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHPort
	I1219 11:43:34.002674   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:34.002695   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHKeyPath
	I1219 11:43:34.002781   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:34.002797   24302 main.go:141] libmachine: (running-upgrade-403000) Calling .GetSSHUsername
	I1219 11:43:34.002879   24302 sshutil.go:53] new ssh client: &{IP:192.168.172.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/running-upgrade-403000/id_rsa Username:docker}
	I1219 11:43:34.002894   24302 sshutil.go:53] new ssh client: &{IP:192.168.172.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/running-upgrade-403000/id_rsa Username:docker}
	W1219 11:43:34.033727   24302 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1219 11:43:34.033798   24302 ssh_runner.go:195] Run: systemctl --version
	I1219 11:43:34.036895   24302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 11:43:34.088200   24302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 11:43:34.088258   24302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1219 11:43:34.091794   24302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1219 11:43:34.095125   24302 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1219 11:43:34.095139   24302 start.go:475] detecting cgroup driver to use...
	I1219 11:43:34.095235   24302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:43:34.102978   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1219 11:43:34.107068   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 11:43:34.111256   24302 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 11:43:34.111299   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 11:43:34.115623   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:43:34.119824   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 11:43:34.123753   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:43:34.128001   24302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 11:43:34.132445   24302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 11:43:34.136799   24302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 11:43:34.140158   24302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 11:43:34.143759   24302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:43:34.225999   24302 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 11:43:34.241458   24302 start.go:475] detecting cgroup driver to use...
	I1219 11:43:34.241559   24302 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1219 11:43:34.253335   24302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:43:34.261806   24302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 11:43:34.288774   24302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:43:34.298321   24302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:43:34.307412   24302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:43:34.316783   24302 ssh_runner.go:195] Run: which cri-dockerd
	I1219 11:43:34.319554   24302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1219 11:43:34.324290   24302 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1219 11:43:34.331601   24302 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1219 11:43:34.409414   24302 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1219 11:43:34.469719   24302 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1219 11:43:34.469800   24302 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1219 11:43:34.477081   24302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:43:34.539281   24302 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1219 11:43:35.735467   24302 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.196158584s)
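	Just before this restart, minikube wrote a 130-byte /etc/docker/daemon.json selecting the cgroupfs driver (see the 'configuring docker to use "cgroupfs" as cgroup driver' line above). The exact bytes are not in the log; a representative file of roughly that shape (an assumption, not the verbatim contents) would look like:
	
	  # On the guest; contents below are assumed, not taken from the log:
	  cat /etc/docker/daemon.json
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": { "max-size": "100m" }
	  }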
	I1219 11:43:35.735546   24302 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1219 11:43:35.776385   24302 out.go:177] 
	W1219 11:43:35.798257   24302 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2023-12-19 19:41:48 UTC, end at Tue 2023-12-19 19:43:35 UTC. --
	Dec 19 19:41:54 running-upgrade-403000 systemd[1]: Starting Docker Application Container Engine...
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.677004627Z" level=info msg="Starting up"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.677982556Z" level=info msg="libcontainerd: started new containerd process" pid=1997
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678026180Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678035097Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678046429Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678056037Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.700516666Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.700742354Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.700798405Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.701039910Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.701076691Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702029150Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702159140Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702256727Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702391599Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702566940Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702610366Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702668854Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702704987Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702736156Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711224595Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711318177Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711398228Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711449404Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711486302Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711522422Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711560939Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711596551Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711630858Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711667092Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711786765Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711876443Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712188909Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712245676Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712295512Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712334541Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712370557Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712404800Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712438525Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712473372Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712517129Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712555723Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712592078Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712652392Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712691284Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712726200Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712760948Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712869880Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712933704Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712971818Z" level=info msg="containerd successfully booted in 0.013122s"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721773956Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721861232Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721877933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721887212Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722761465Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722800349Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722822230Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722833618Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736484339Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736536635Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736545203Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736551991Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736556618Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736560813Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736708417Z" level=info msg="Loading containers: start."
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.797037790Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.834503434Z" level=info msg="Loading containers: done."
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.849388265Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.849521811Z" level=info msg="Daemon has completed initialization"
	Dec 19 19:41:54 running-upgrade-403000 systemd[1]: Started Docker Application Container Engine.
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.864966722Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.865040936Z" level=info msg="API listen on [::]:2376"
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.420134946Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7bb75be40713ad2d5ae1c099978e7bc1daf0a2a7ddce717f3ce688fcdd8eb8a1/shim.sock" debug=false pid=3701
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.422525139Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5c3a9b826a2be93ea41767b1780d6ca52d179ac21d91b6159f90eeeae192fd29/shim.sock" debug=false pid=3702
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.506054209Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6f1e1cde7ccbea076c0ed1958a7fa67e0dc0ca3704a74e9f98df66bd5984d68f/shim.sock" debug=false pid=3746
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.520114696Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a298c8b46bbe8619534bc667786f65c6d2c647c38c982aacf96394f1e666981/shim.sock" debug=false pid=3765
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.540231550Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cae950f5b112fa34a107a8ca7bfa00ac17e37e8189bf275fa80a9c31ac6a2260/shim.sock" debug=false pid=3788
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.708011320Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c/shim.sock" debug=false pid=3924
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.725809407Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30/shim.sock" debug=false pid=3945
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.819095160Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a/shim.sock" debug=false pid=4005
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.836359845Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67/shim.sock" debug=false pid=4013
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.882317243Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e/shim.sock" debug=false pid=4053
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.138806412Z" level=info msg="Processing signal 'terminated'"
	Dec 19 19:43:22 running-upgrade-403000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.451935685Z" level=info msg="shim reaped" id=5c3a9b826a2be93ea41767b1780d6ca52d179ac21d91b6159f90eeeae192fd29
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.454102365Z" level=info msg="shim reaped" id=012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.454477275Z" level=info msg="shim reaped" id=25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.462088824Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.464881913Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.464917051Z" level=warning msg="25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.471806262Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.472344235Z" level=warning msg="012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.473408997Z" level=info msg="shim reaped" id=6f1e1cde7ccbea076c0ed1958a7fa67e0dc0ca3704a74e9f98df66bd5984d68f
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.474974525Z" level=info msg="shim reaped" id=0a298c8b46bbe8619534bc667786f65c6d2c647c38c982aacf96394f1e666981
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.475100483Z" level=info msg="shim reaped" id=844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.482981119Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.483063996Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.485258483Z" level=warning msg="844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.486918295Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.489477921Z" level=info msg="shim reaped" id=7bb75be40713ad2d5ae1c099978e7bc1daf0a2a7ddce717f3ce688fcdd8eb8a1
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.500680541Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.510942444Z" level=info msg="shim reaped" id=e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.520217576Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.523427546Z" level=info msg="shim reaped" id=cae950f5b112fa34a107a8ca7bfa00ac17e37e8189bf275fa80a9c31ac6a2260
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.523736307Z" level=warning msg="e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.533184800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:27 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:27.640288988Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca/shim.sock" debug=false pid=5320
	Dec 19 19:43:27 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:27.804573079Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda/shim.sock" debug=false pid=5380
	Dec 19 19:43:28 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:28.256759130Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a/shim.sock" debug=false pid=5431
	Dec 19 19:43:28 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:28.419915625Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f46bb72ba78fc1a0a9d082cbdd8b5fac33f0968469fff537e053b6c2d6cda68/shim.sock" debug=false pid=5484
	Dec 19 19:43:30 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:30.647972112Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738/shim.sock" debug=false pid=5606
	Dec 19 19:43:30 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:30.802334775Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/27d551551a9608f62c5220c02f4cf5095660439b6f8990f2752397789af6100f/shim.sock" debug=false pid=5651
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.204768698Z" level=info msg="Container aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c failed to exit within 10 seconds of signal 15 - using the force"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.281818738Z" level=info msg="shim reaped" id=aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.292045007Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.292173312Z" level=warning msg="aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330054585Z" level=info msg="Daemon shutdown complete"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330124793Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330190332Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330390860Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.334991760Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.336767115Z" level=warning msg="a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.345376472Z" level=error msg="a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.345466089Z" level=error msg="Handler for POST /containers/a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Succeeded.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5320 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5380 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5431 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5484 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5606 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5651 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: Starting Docker Application Container Engine...
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.357716531Z" level=info msg="Starting up"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358705071Z" level=info msg="libcontainerd: started new containerd process" pid=5760
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358745093Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358752714Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358764148Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358776085Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.380842889Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381048071Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381153382Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381341394Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381373098Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382099509Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382133036Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382207884Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382343889Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382492242Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382523039Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382534544Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382539128Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382543927Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382602302Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382612387Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382654309Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382666111Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382672928Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382680656Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382687734Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382694863Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382701263Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382708233Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.389869324Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.389970663Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390231181Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390693465Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390761040Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390801006Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390837469Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390872925Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390907221Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390942408Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390995785Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391037296Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391072605Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391123134Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391178849Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391220116Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391256303Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391361636Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391421093Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391461960Z" level=info msg="containerd successfully booted in 0.011317s"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401561237Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401656487Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401675625Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401686993Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402365251Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402435965Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402485742Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402521529Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.404394659Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.412908189Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.412971447Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413010977Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413043136Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413074788Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413110787Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413260464Z" level=info msg="Loading containers: start."
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.712631285Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.712801384Z" level=warning msg="7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.724631759Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.724812373Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.732548391Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.747296430Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.752660933Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.753257161Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.766108474Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.767551538Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.767647405Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.771774105Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.772162177Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.800720776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.801655119Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.801803393Z" level=warning msg="5f46bb72ba78fc1a0a9d082cbdd8b5fac33f0968469fff537e053b6c2d6cda68 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5f46bb72ba78fc1a0a9d082cbdd8b5fac33f0968469fff537e053b6c2d6cda68/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.873636231Z" level=info msg="Removing stale sandbox 19ab4b24e94d5f1d84519382396a488fb8b7b86a1139572b75f9a5cc5113ed7f (583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738)"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.874870887Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 395774905f1e4932ffffb54ea8d7be0d43bcde034d3aa8f0040c183fcd5bf2b5 ff2d4960149b5f5b9b113921115e6567b885fc47e1a599612b866ec8dca9af49], retrying...."
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.937441165Z" level=info msg="Removing stale sandbox 8c2f4d4f1d93fc7a21a4a38efacafbd43f7f7a28306fd2c6a42de4d277ba4367 (fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca)"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.938343615Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 395774905f1e4932ffffb54ea8d7be0d43bcde034d3aa8f0040c183fcd5bf2b5 5253b408cb6b2304fae32c9c0187ab7c78907a45d825735aa6f2d23c9e3efdef], retrying...."
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.000888852Z" level=info msg="Removing stale sandbox 9254ecb5a4b0132e22fae56dd13d339dd05eda35dffb63e526b54387ce7f51e3 (1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a)"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.001992817Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 395774905f1e4932ffffb54ea8d7be0d43bcde034d3aa8f0040c183fcd5bf2b5 d78c3a0559afc107d4179951735a7b9b2300698a44a48a3eb90e75478c468213], retrying...."
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.009063338Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.015724743Z" level=info msg="Loading containers: done."
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.033707897Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.033807358Z" level=info msg="Daemon has completed initialization"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.042977484Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.043076130Z" level=info msg="API listen on [::]:2376"
	Dec 19 19:43:34 running-upgrade-403000 systemd[1]: Started Docker Application Container Engine.
	Dec 19 19:43:34 running-upgrade-403000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.696654080Z" level=info msg="Processing signal 'terminated'"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.809551199Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9/shim.sock" debug=false pid=6368
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.811758467Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9/shim.sock" debug=false pid=6374
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.835667628Z" level=warning msg="27d551551a9608f62c5220c02f4cf5095660439b6f8990f2752397789af6100f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/27d551551a9608f62c5220c02f4cf5095660439b6f8990f2752397789af6100f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.835785224Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854616685Z" level=info msg="Daemon shutdown complete"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855533018Z" level=error msg="failed to kill shim" error="context canceled: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855633748Z" level=error msg="failed to kill shim" error="context canceled: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854692854Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854860763Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854878438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854883356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855485662Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855931723Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855942631Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.856018899Z" level=error msg="failed to delete failed start container" container=c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9 error="grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.856095818Z" level=error msg="failed to delete failed start container" container=bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9 error="grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.860381564Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.860966662Z" level=warning msg="Failed detaching sandbox a8ee5906358307b45b48ac369a5715f7806e8c56651d02cd24bd9e0ed7852d80 from endpoint 834328abaa80aa23484d13e6d8328f830acd6b1379ca4ebb44c3ae075d7d3a39: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861084593Z" level=warning msg="Failed deleting endpoint 834328abaa80aa23484d13e6d8328f830acd6b1379ca4ebb44c3ae075d7d3a39: endpoint with name k8s_POD_etcd-minikube_kube-system_1fb91b8c854b8eaa7774489e79ebf2bf_1 id 834328abaa80aa23484d13e6d8328f830acd6b1379ca4ebb44c3ae075d7d3a39 has active containers\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861291172Z" level=warning msg="Failed to delete sandbox a8ee5906358307b45b48ac369a5715f7806e8c56651d02cd24bd9e0ed7852d80 from store: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861163425Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861739804Z" level=warning msg="Failed detaching sandbox 27c04fab1eb7af525b6d545ab6aec2bdcaaf33508a6a9d4a047438f9a08c89ec from endpoint 58978a700d00cb994b637a6d272df46bf161468cedcfe305d5744347bfabc9a4: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861761033Z" level=warning msg="Failed deleting endpoint 58978a700d00cb994b637a6d272df46bf161468cedcfe305d5744347bfabc9a4: endpoint with name k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_2 id 58978a700d00cb994b637a6d272df46bf161468cedcfe305d5744347bfabc9a4 has active containers\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861772463Z" level=warning msg="Failed to delete sandbox 27c04fab1eb7af525b6d545ab6aec2bdcaaf33508a6a9d4a047438f9a08c89ec from store: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.868231786Z" level=error msg="bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.868358810Z" level=error msg="Handler for POST /containers/bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9/start returned error: transport is closing: unavailable"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.870540935Z" level=error msg="c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.870567099Z" level=error msg="Handler for POST /containers/c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9/start returned error: transport is closing: unavailable"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.439850647Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.442406881Z" level=warning msg="7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.451609767Z" level=error msg="7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.451656821Z" level=error msg="Handler for POST /containers/7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.460029699Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.464940545Z" level=warning msg="4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.471716794Z" level=error msg="4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.471760077Z" level=error msg="Handler for POST /containers/4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Succeeded.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 6368 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 6374 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: Starting Docker Application Container Engine...
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.883744996Z" level=info msg="Starting up"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.884860214Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.884918064Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.884963653Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.885006619Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.885194245Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Logs begin at Tue 2023-12-19 19:41:48 UTC, end at Tue 2023-12-19 19:43:35 UTC. --
	Dec 19 19:41:54 running-upgrade-403000 systemd[1]: Starting Docker Application Container Engine...
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.677004627Z" level=info msg="Starting up"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.677982556Z" level=info msg="libcontainerd: started new containerd process" pid=1997
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678026180Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678035097Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678046429Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.678056037Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.700516666Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.700742354Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.700798405Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.701039910Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.701076691Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702029150Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702159140Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702256727Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702391599Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702566940Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702610366Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702668854Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702704987Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.702736156Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711224595Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711318177Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711398228Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711449404Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711486302Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711522422Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711560939Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711596551Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711630858Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711667092Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711786765Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.711876443Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712188909Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712245676Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712295512Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712334541Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712370557Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712404800Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712438525Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712473372Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712517129Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712555723Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712592078Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712652392Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712691284Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712726200Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712760948Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712869880Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712933704Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.712971818Z" level=info msg="containerd successfully booted in 0.013122s"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721773956Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721861232Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721877933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.721887212Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722761465Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722800349Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722822230Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.722833618Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736484339Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736536635Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736545203Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736551991Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736556618Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736560813Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.736708417Z" level=info msg="Loading containers: start."
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.797037790Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.834503434Z" level=info msg="Loading containers: done."
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.849388265Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.849521811Z" level=info msg="Daemon has completed initialization"
	Dec 19 19:41:54 running-upgrade-403000 systemd[1]: Started Docker Application Container Engine.
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.864966722Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 19 19:41:54 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:41:54.865040936Z" level=info msg="API listen on [::]:2376"
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.420134946Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7bb75be40713ad2d5ae1c099978e7bc1daf0a2a7ddce717f3ce688fcdd8eb8a1/shim.sock" debug=false pid=3701
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.422525139Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5c3a9b826a2be93ea41767b1780d6ca52d179ac21d91b6159f90eeeae192fd29/shim.sock" debug=false pid=3702
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.506054209Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6f1e1cde7ccbea076c0ed1958a7fa67e0dc0ca3704a74e9f98df66bd5984d68f/shim.sock" debug=false pid=3746
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.520114696Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0a298c8b46bbe8619534bc667786f65c6d2c647c38c982aacf96394f1e666981/shim.sock" debug=false pid=3765
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.540231550Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cae950f5b112fa34a107a8ca7bfa00ac17e37e8189bf275fa80a9c31ac6a2260/shim.sock" debug=false pid=3788
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.708011320Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c/shim.sock" debug=false pid=3924
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.725809407Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30/shim.sock" debug=false pid=3945
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.819095160Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a/shim.sock" debug=false pid=4005
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.836359845Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67/shim.sock" debug=false pid=4013
	Dec 19 19:42:59 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:42:59.882317243Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e/shim.sock" debug=false pid=4053
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.138806412Z" level=info msg="Processing signal 'terminated'"
	Dec 19 19:43:22 running-upgrade-403000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.451935685Z" level=info msg="shim reaped" id=5c3a9b826a2be93ea41767b1780d6ca52d179ac21d91b6159f90eeeae192fd29
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.454102365Z" level=info msg="shim reaped" id=012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.454477275Z" level=info msg="shim reaped" id=25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.462088824Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.464881913Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.464917051Z" level=warning msg="25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/25113a1ca846e9d9dfedb10a1e1db97cc146b7b59ea0955f8b9108f03fa03d67/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.471806262Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.472344235Z" level=warning msg="012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/012a4c4876063bf4ef7df5db0ccdaa78862c92f07f49b58ecf072d543c459d0a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.473408997Z" level=info msg="shim reaped" id=6f1e1cde7ccbea076c0ed1958a7fa67e0dc0ca3704a74e9f98df66bd5984d68f
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.474974525Z" level=info msg="shim reaped" id=0a298c8b46bbe8619534bc667786f65c6d2c647c38c982aacf96394f1e666981
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.475100483Z" level=info msg="shim reaped" id=844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.482981119Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.483063996Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.485258483Z" level=warning msg="844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/844106bd951d43a0d6a91ec27c2c9c84572405a4c5bb7f79177f581d0bafcf30/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.486918295Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.489477921Z" level=info msg="shim reaped" id=7bb75be40713ad2d5ae1c099978e7bc1daf0a2a7ddce717f3ce688fcdd8eb8a1
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.500680541Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.510942444Z" level=info msg="shim reaped" id=e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.520217576Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.523427546Z" level=info msg="shim reaped" id=cae950f5b112fa34a107a8ca7bfa00ac17e37e8189bf275fa80a9c31ac6a2260
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.523736307Z" level=warning msg="e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e933148ce679d18d83610ba522191abb634ae260413f52b9040da25762f3ae2e/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:22 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:22.533184800Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:27 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:27.640288988Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca/shim.sock" debug=false pid=5320
	Dec 19 19:43:27 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:27.804573079Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda/shim.sock" debug=false pid=5380
	Dec 19 19:43:28 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:28.256759130Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a/shim.sock" debug=false pid=5431
	Dec 19 19:43:28 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:28.419915625Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f46bb72ba78fc1a0a9d082cbdd8b5fac33f0968469fff537e053b6c2d6cda68/shim.sock" debug=false pid=5484
	Dec 19 19:43:30 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:30.647972112Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738/shim.sock" debug=false pid=5606
	Dec 19 19:43:30 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:30.802334775Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/27d551551a9608f62c5220c02f4cf5095660439b6f8990f2752397789af6100f/shim.sock" debug=false pid=5651
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.204768698Z" level=info msg="Container aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c failed to exit within 10 seconds of signal 15 - using the force"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.281818738Z" level=info msg="shim reaped" id=aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.292045007Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.292173312Z" level=warning msg="aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/aa2e21a63ea0a9f45c9c9b3028a2b236fd4a169697bdd30064ccc129fbc5765c/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330054585Z" level=info msg="Daemon shutdown complete"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330124793Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330190332Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.330390860Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.334991760Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.336767115Z" level=warning msg="a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.345376472Z" level=error msg="a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:32 running-upgrade-403000 dockerd[1989]: time="2023-12-19T19:43:32.345466089Z" level=error msg="Handler for POST /containers/a3fe9bdeac2381293e4aabd6f03c1dec2ab9e7cac4390479a01325611d76f2e4/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Succeeded.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5320 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5380 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5431 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5484 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5606 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 5651 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:33 running-upgrade-403000 systemd[1]: Starting Docker Application Container Engine...
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.357716531Z" level=info msg="Starting up"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358705071Z" level=info msg="libcontainerd: started new containerd process" pid=5760
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358745093Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358752714Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358764148Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.358776085Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.380842889Z" level=info msg="starting containerd" revision=b34a5c8af56e510852c35414db4c1f4fa6172339 version=v1.2.10
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381048071Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381153382Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381341394Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.381373098Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382099509Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382133036Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382207884Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382343889Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382492242Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382523039Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382534544Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382539128Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.81\n": exit status 1"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382543927Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382602302Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382612387Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382654309Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382666111Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382672928Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382680656Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382687734Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382694863Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382701263Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.382708233Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.389869324Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.389970663Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390231181Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390693465Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390761040Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390801006Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390837469Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390872925Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390907221Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390942408Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.390995785Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391037296Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391072605Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391123134Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391178849Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391220116Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391256303Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391361636Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391421093Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.391461960Z" level=info msg="containerd successfully booted in 0.011317s"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401561237Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401656487Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401675625Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.401686993Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402365251Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402435965Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402485742Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.402521529Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.404394659Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.412908189Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.412971447Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413010977Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413043136Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413074788Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413110787Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.413260464Z" level=info msg="Loading containers: start."
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.712631285Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.712801384Z" level=warning msg="7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.724631759Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7377977f1504f9233d7122197d0076dd1794c56e12294269bd8b84085adc6bda"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.724812373Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.732548391Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.747296430Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.752660933Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.753257161Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.766108474Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738 path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.767551538Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.767647405Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.771774105Z" level=warning msg="unmount task rootfs" error="no such file or directory" id=fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca path="/var/run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.772162177Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.800720776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.801655119Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.801803393Z" level=warning msg="5f46bb72ba78fc1a0a9d082cbdd8b5fac33f0968469fff537e053b6c2d6cda68 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/5f46bb72ba78fc1a0a9d082cbdd8b5fac33f0968469fff537e053b6c2d6cda68/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.873636231Z" level=info msg="Removing stale sandbox 19ab4b24e94d5f1d84519382396a488fb8b7b86a1139572b75f9a5cc5113ed7f (583de9b6889d69d3acfbe726376ac81521437b9546480b3dfa514bc7e59e8738)"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.874870887Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 395774905f1e4932ffffb54ea8d7be0d43bcde034d3aa8f0040c183fcd5bf2b5 ff2d4960149b5f5b9b113921115e6567b885fc47e1a599612b866ec8dca9af49], retrying...."
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.937441165Z" level=info msg="Removing stale sandbox 8c2f4d4f1d93fc7a21a4a38efacafbd43f7f7a28306fd2c6a42de4d277ba4367 (fa0416b9bb8142935084933a2958e86a45af6694436c15f1061f021f2520beca)"
	Dec 19 19:43:33 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:33.938343615Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 395774905f1e4932ffffb54ea8d7be0d43bcde034d3aa8f0040c183fcd5bf2b5 5253b408cb6b2304fae32c9c0187ab7c78907a45d825735aa6f2d23c9e3efdef], retrying...."
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.000888852Z" level=info msg="Removing stale sandbox 9254ecb5a4b0132e22fae56dd13d339dd05eda35dffb63e526b54387ce7f51e3 (1e119af810eb87a73efce6a8f93001295d651120bd43393f7a33032bdf02cb9a)"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.001992817Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 395774905f1e4932ffffb54ea8d7be0d43bcde034d3aa8f0040c183fcd5bf2b5 d78c3a0559afc107d4179951735a7b9b2300698a44a48a3eb90e75478c468213], retrying...."
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.009063338Z" level=info msg="There are old running containers, the network config will not take affect"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.015724743Z" level=info msg="Loading containers: done."
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.033707897Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.033807358Z" level=info msg="Daemon has completed initialization"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.042977484Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.043076130Z" level=info msg="API listen on [::]:2376"
	Dec 19 19:43:34 running-upgrade-403000 systemd[1]: Started Docker Application Container Engine.
	Dec 19 19:43:34 running-upgrade-403000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.696654080Z" level=info msg="Processing signal 'terminated'"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.809551199Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9/shim.sock" debug=false pid=6368
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.811758467Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9/shim.sock" debug=false pid=6374
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.835667628Z" level=warning msg="27d551551a9608f62c5220c02f4cf5095660439b6f8990f2752397789af6100f cleanup: failed to unmount IPC: umount /var/lib/docker/containers/27d551551a9608f62c5220c02f4cf5095660439b6f8990f2752397789af6100f/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.835785224Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854616685Z" level=info msg="Daemon shutdown complete"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855533018Z" level=error msg="failed to kill shim" error="context canceled: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855633748Z" level=error msg="failed to kill shim" error="context canceled: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854692854Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854860763Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854878438Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.854883356Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855485662Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855931723Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.855942631Z" level=error msg="stream copy error: reading from a closed fifo"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.856018899Z" level=error msg="failed to delete failed start container" container=c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9 error="grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.856095818Z" level=error msg="failed to delete failed start container" container=bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9 error="grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.860381564Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.860966662Z" level=warning msg="Failed detaching sandbox a8ee5906358307b45b48ac369a5715f7806e8c56651d02cd24bd9e0ed7852d80 from endpoint 834328abaa80aa23484d13e6d8328f830acd6b1379ca4ebb44c3ae075d7d3a39: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861084593Z" level=warning msg="Failed deleting endpoint 834328abaa80aa23484d13e6d8328f830acd6b1379ca4ebb44c3ae075d7d3a39: endpoint with name k8s_POD_etcd-minikube_kube-system_1fb91b8c854b8eaa7774489e79ebf2bf_1 id 834328abaa80aa23484d13e6d8328f830acd6b1379ca4ebb44c3ae075d7d3a39 has active containers\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861291172Z" level=warning msg="Failed to delete sandbox a8ee5906358307b45b48ac369a5715f7806e8c56651d02cd24bd9e0ed7852d80 from store: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861163425Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861739804Z" level=warning msg="Failed detaching sandbox 27c04fab1eb7af525b6d545ab6aec2bdcaaf33508a6a9d4a047438f9a08c89ec from endpoint 58978a700d00cb994b637a6d272df46bf161468cedcfe305d5744347bfabc9a4: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861761033Z" level=warning msg="Failed deleting endpoint 58978a700d00cb994b637a6d272df46bf161468cedcfe305d5744347bfabc9a4: endpoint with name k8s_POD_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_2 id 58978a700d00cb994b637a6d272df46bf161468cedcfe305d5744347bfabc9a4 has active containers\n"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.861772463Z" level=warning msg="Failed to delete sandbox 27c04fab1eb7af525b6d545ab6aec2bdcaaf33508a6a9d4a047438f9a08c89ec from store: open : no such file or directory"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.868231786Z" level=error msg="bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.868358810Z" level=error msg="Handler for POST /containers/bae19d985f9ca805defbf7154e111acbc86ba859ac5f6e8784bdfd730516c7d9/start returned error: transport is closing: unavailable"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.870540935Z" level=error msg="c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:34 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:34.870567099Z" level=error msg="Handler for POST /containers/c6fac39cfa0ecfa69479ff82f323d4151399ed1bf06ac3f65c74abe4b0bfa1a9/start returned error: transport is closing: unavailable"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.439850647Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.442406881Z" level=warning msg="7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.451609767Z" level=error msg="7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.451656821Z" level=error msg="Handler for POST /containers/7acd255f7d5518a556c139fa018fd356a25c821b44ba1b4c3bdb575ef042c69a/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.460029699Z" level=warning msg="failed to get endpoint_count map for scope local: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.464940545Z" level=warning msg="4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5/mounts/shm, flags: 0x2: no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.471716794Z" level=error msg="4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5 cleanup: failed to delete container from containerd: grpc: the client connection is closing: unknown"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[5753]: time="2023-12-19T19:43:35.471760077Z" level=error msg="Handler for POST /containers/4a57c78f77974dd222f1b1bffeb4119ed1811519e3a80fb6457bd5c6f1efb2f5/start returned error: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Succeeded.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 6368 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Found left-over process 6374 (containerd-shim) in control group while starting unit. Ignoring.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: Starting Docker Application Container Engine...
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.883744996Z" level=info msg="Starting up"
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.884860214Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.884918064Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.884963653Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.885006619Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: time="2023-12-19T19:43:35.885194245Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///run/containerd/containerd.sock 0  <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused\". Reconnecting..." module=grpc
	Dec 19 19:43:35 running-upgrade-403000 dockerd[6434]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/containerd/containerd.sock: connect: connection refused": unavailable
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 19 19:43:35 running-upgrade-403000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1219 11:43:35.798480   24302 out.go:239] * 
	W1219 11:43:35.799152   24302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 11:43:35.861190   24302 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-403000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-19 11:43:35.934951 -0800 PST m=+2483.686514086
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-403000 -n running-upgrade-403000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-403000 -n running-upgrade-403000: exit status 6 (135.924371ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1219 11:43:36.063361   24347 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-403000" does not appear in /Users/jenkins/minikube-integration/17837-20429/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-403000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-403000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-403000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-403000: (1.479685826s)
--- FAIL: TestRunningBinaryUpgrade (126.97s)
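Failure analysis: the journal above shows docker.service being stopped and restarted while containerd-shim processes from the previous daemon were still in its control group ("Found left-over process ... Ignoring."), after which the final restart (dockerd[6434]) failed to dial /run/containerd/containerd.sock ("connection refused") and the unit failed; that is what surfaces as exit status 90. A minimal triage sketch, assuming the VM were still reachable (the profile was deleted during cleanup, so these commands are illustrative and not reproduced from this run; note that in this docker 19.03.5 image containerd is started by dockerd itself, not as a separate unit):

	# Inspect the engine unit and any orphaned shims inside the guest
	minikube ssh -p running-upgrade-403000
	sudo systemctl status docker
	sudo journalctl -u docker --no-pager | tail -n 50
	pgrep -a containerd-shim   # left-over shims keep the old control group populated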

TestNoKubernetes/serial/StartWithK8s (16.3s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-924000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-924000 --driver=hyperkit : exit status 90 (16.155609179s)

-- stdout --
	* [NoKubernetes-924000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node NoKubernetes-924000 in cluster NoKubernetes-924000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-19 19:43:48 UTC, ends at Tue 2023-12-19 19:43:53 UTC. --
	Dec 19 19:43:49 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:43:49 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:43:51 NoKubernetes-924000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:43:51 NoKubernetes-924000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:43:51 NoKubernetes-924000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:43:51 NoKubernetes-924000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:43:51 NoKubernetes-924000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:43:53 NoKubernetes-924000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:43:53 NoKubernetes-924000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:43:53 NoKubernetes-924000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:43:53 NoKubernetes-924000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 19 19:43:53 NoKubernetes-924000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-924000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-924000 -n NoKubernetes-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-924000 -n NoKubernetes-924000: exit status 6 (143.369045ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1219 11:43:54.341757   24376 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-924000" does not appear in /Users/jenkins/minikube-integration/17837-20429/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-924000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (16.30s)
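Failure analysis: systemd refuses to (re)start a .socket unit while the service it feeds is already active, which matches the final journal lines above ("Socket service cri-docker.service already active, refusing." followed by "Failed to listen on CRI Docker Socket for the API."). A hedged workaround sketch, assuming shell access to the guest; the unit names are taken from the journal, and the point illustrated is only the ordering (stop the service before restarting its socket):

	sudo systemctl stop cri-docker.service      # release the socket's service first
	sudo systemctl restart cri-docker.socket    # now the socket can bind again
	sudo systemctl start cri-docker.service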

TestNoKubernetes/serial/StartWithStopK8s (6.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-924000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-924000 --no-kubernetes --driver=hyperkit : (3.579449315s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-924000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-924000 status -o json: exit status 6 (141.227346ms)

-- stdout --
	{"Name":"NoKubernetes-924000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

-- /stdout --
** stderr ** 
	E1219 11:43:58.063119   24392 status.go:415] kubeconfig endpoint: extract IP: "NoKubernetes-924000" does not appear in /Users/jenkins/minikube-integration/17837-20429/kubeconfig

** /stderr **
no_kubernetes_test.go:203: failed to run minikube status with json output. args "out/minikube-darwin-amd64 -p NoKubernetes-924000 status -o json" : exit status 6
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-924000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-924000: (2.32874922s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-924000 -n NoKubernetes-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-924000 -n NoKubernetes-924000: exit status 85 (124.392026ms)

-- stdout --
	* Profile "NoKubernetes-924000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-924000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-924000" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-924000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-924000\"")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-924000 -n NoKubernetes-924000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-924000 -n NoKubernetes-924000: exit status 85 (120.930285ms)

-- stdout --
	* Profile "NoKubernetes-924000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p NoKubernetes-924000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "NoKubernetes-924000" host is not running, skipping log retrieval (state="* Profile \"NoKubernetes-924000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p NoKubernetes-924000\"")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (6.30s)
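Failure analysis: the status call fails with exit status 6 because the kubeconfig at /Users/jenkins/minikube-integration/17837-20429/kubeconfig has no entry for "NoKubernetes-924000", exactly as the status output's own warning says. A sketch of the fix that warning points at, assuming the profile still existed at that point (it is deleted later in this test), using the same binary the test invokes:

	# Re-sync the kubeconfig context with the running profile, then re-check status
	out/minikube-darwin-amd64 update-context -p NoKubernetes-924000
	out/minikube-darwin-amd64 -p NoKubernetes-924000 status -o json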

TestNetworkPlugins/group/false/Start (15.74s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : exit status 90 (15.729177324s)

-- stdout --
	* [false-377000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node false-377000 in cluster false-377000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1219 11:48:20.043094   25260 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:48:20.043553   25260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:48:20.043561   25260 out.go:309] Setting ErrFile to fd 2...
	I1219 11:48:20.043565   25260 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:48:20.043767   25260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:48:20.045549   25260 out.go:303] Setting JSON to false
	I1219 11:48:20.073580   25260 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8270,"bootTime":1703007030,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:48:20.073702   25260 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:48:20.098597   25260 out.go:177] * [false-377000] minikube v1.32.0 on Darwin 14.2
	I1219 11:48:20.177848   25260 out.go:177]   - MINIKUBE_LOCATION=17837
	I1219 11:48:20.143784   25260 notify.go:220] Checking for updates...
	I1219 11:48:20.220746   25260 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:48:20.275747   25260 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:48:20.341588   25260 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:48:20.388668   25260 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:48:20.450437   25260 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 11:48:20.479339   25260 config.go:182] Loaded profile config "custom-flannel-377000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:48:20.479438   25260 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:48:20.524918   25260 out.go:177] * Using the hyperkit driver based on user configuration
	I1219 11:48:20.575748   25260 start.go:298] selected driver: hyperkit
	I1219 11:48:20.575765   25260 start.go:902] validating driver "hyperkit" against <nil>
	I1219 11:48:20.575782   25260 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 11:48:20.579418   25260 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:48:20.579521   25260 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:48:20.587457   25260 install.go:137] /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:48:20.592263   25260 install.go:79] stdout: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:48:20.592303   25260 install.go:81] /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit looks good
	I1219 11:48:20.592338   25260 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1219 11:48:20.593215   25260 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 11:48:20.593276   25260 cni.go:84] Creating CNI manager for "false"
	I1219 11:48:20.593291   25260 start_flags.go:323] config:
	{Name:false-377000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-377000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:48:20.593480   25260 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:48:20.616688   25260 out.go:177] * Starting control plane node false-377000 in cluster false-377000
	I1219 11:48:20.637782   25260 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1219 11:48:20.637855   25260 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1219 11:48:20.637888   25260 cache.go:56] Caching tarball of preloaded images
	I1219 11:48:20.638097   25260 preload.go:174] Found /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 11:48:20.638114   25260 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1219 11:48:20.638258   25260 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/false-377000/config.json ...
	I1219 11:48:20.638291   25260 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/false-377000/config.json: {Name:mkbd46d9c2b682a90ec7fa4006834ef4717565cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
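The profile.go and lock.go lines above save the freshly generated cluster config to config.json under a write lock. A minimal sketch of the save step, assuming a marshal-then-rename write is sufficient; the cross-process locking that minikube's lock.WriteFile adds is omitted here:

package sketch

import (
	"encoding/json"
	"os"
)

// saveConfig marshals cfg and writes it atomically: temp file, then rename.
func saveConfig(path string, cfg any) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic on the same filesystem
}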
	I1219 11:48:20.640476   25260 start.go:365] acquiring machines lock for false-377000: {Name:mkc3d80bc77e215fa21f0c59378bcbfaf828d0a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 11:48:20.640552   25260 start.go:369] acquired machines lock for "false-377000" in 60.739µs
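Before provisioning, start.go acquires a named machines lock; the struct above shows Delay:500ms and Timeout:13m0s. A sketch of that acquire-with-retry pattern using an exclusive lock file (the real implementation uses a cross-process mutex, so this is a simplification):

package sketch

import (
	"fmt"
	"os"
	"time"
)

// acquire tries to create lockPath exclusively, retrying every delay until
// timeout — mirroring the Delay/Timeout fields printed in the lock struct.
func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", lockPath)
		}
		time.Sleep(delay)
	}
}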
	I1219 11:48:20.640583   25260 start.go:93] Provisioning new machine with config: &{Name:false-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-377000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1219 11:48:20.640680   25260 start.go:125] createHost starting for "" (driver="hyperkit")
	I1219 11:48:20.692646   25260 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1219 11:48:20.692907   25260 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:48:20.692945   25260 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:48:20.701340   25260 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60254
	I1219 11:48:20.701757   25260 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:48:20.702241   25260 main.go:141] libmachine: Using API Version  1
	I1219 11:48:20.702264   25260 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:48:20.702507   25260 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:48:20.702617   25260 main.go:141] libmachine: (false-377000) Calling .GetMachineName
	I1219 11:48:20.702709   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
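The plugin-server exchange above (Launching plugin server, GetVersion, SetConfigRaw, GetMachineName, DriverName) is libmachine driving the hyperkit driver as a separate process over an RPC connection on localhost. A minimal client-side sketch in that style using net/rpc; the service/method name below is illustrative, not libmachine's actual registered name:

package main

import (
	"fmt"
	"log"
	"net/rpc"
)

func main() {
	// Port taken from the "Plugin server listening" log line above.
	client, err := rpc.Dial("tcp", "127.0.0.1:60254")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	var version int
	// Hypothetical method name, for illustration only.
	if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("plugin API version:", version)
}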
	I1219 11:48:20.702815   25260 start.go:159] libmachine.API.Create for "false-377000" (driver="hyperkit")
	I1219 11:48:20.702845   25260 client.go:168] LocalClient.Create starting
	I1219 11:48:20.702878   25260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem
	I1219 11:48:20.702930   25260 main.go:141] libmachine: Decoding PEM data...
	I1219 11:48:20.702946   25260 main.go:141] libmachine: Parsing certificate...
	I1219 11:48:20.702997   25260 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem
	I1219 11:48:20.703033   25260 main.go:141] libmachine: Decoding PEM data...
	I1219 11:48:20.703046   25260 main.go:141] libmachine: Parsing certificate...
	I1219 11:48:20.703058   25260 main.go:141] libmachine: Running pre-create checks...
	I1219 11:48:20.703068   25260 main.go:141] libmachine: (false-377000) Calling .PreCreateCheck
	I1219 11:48:20.703146   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:20.703302   25260 main.go:141] libmachine: (false-377000) Calling .GetConfigRaw
	I1219 11:48:20.703725   25260 main.go:141] libmachine: Creating machine...
	I1219 11:48:20.703734   25260 main.go:141] libmachine: (false-377000) Calling .Create
	I1219 11:48:20.703822   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:20.703969   25260 main.go:141] libmachine: (false-377000) DBG | I1219 11:48:20.703804   25268 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:48:20.704018   25260 main.go:141] libmachine: (false-377000) Downloading /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17837-20429/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1219 11:48:20.869778   25260 main.go:141] libmachine: (false-377000) DBG | I1219 11:48:20.869719   25268 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/id_rsa...
	I1219 11:48:21.105104   25260 main.go:141] libmachine: (false-377000) DBG | I1219 11:48:21.105035   25268 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/false-377000.rawdisk...
	I1219 11:48:21.105122   25260 main.go:141] libmachine: (false-377000) DBG | Writing magic tar header
	I1219 11:48:21.105132   25260 main.go:141] libmachine: (false-377000) DBG | Writing SSH key tar header
	I1219 11:48:21.105493   25260 main.go:141] libmachine: (false-377000) DBG | I1219 11:48:21.105439   25268 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000 ...
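common.go above creates the machine's SSH key, writes a raw disk image, and prepends tar headers so the guest can pick the key up on first boot. A simplified sketch of that disk-image step, assuming a plain tar stream carrying the key at offset 0 followed by extending the file to full size as a sparse file (the real driver also writes its own magic header, omitted here):

package sketch

import (
	"archive/tar"
	"os"
)

// createRawDisk writes a tar stream holding the SSH key at the start of the
// image, then extends the file to the requested size without allocating it.
func createRawDisk(path string, sizeMB int64, keyPath string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	tw := tar.NewWriter(f)
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	return f.Truncate(sizeMB * 1024 * 1024) // sparse-extend to full disk size
}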
	I1219 11:48:21.441444   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:21.441464   25260 main.go:141] libmachine: (false-377000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/hyperkit.pid
	I1219 11:48:21.441475   25260 main.go:141] libmachine: (false-377000) DBG | Using UUID 8ff40b30-9ea7-11ee-bd07-149d997f80ea
	I1219 11:48:21.471313   25260 main.go:141] libmachine: (false-377000) DBG | Generated MAC 56:7d:57:2e:c0:bb
	I1219 11:48:21.471335   25260 main.go:141] libmachine: (false-377000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-377000
	I1219 11:48:21.471363   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8ff40b30-9ea7-11ee-bd07-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1219 11:48:21.471396   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8ff40b30-9ea7-11ee-bd07-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00009f1d0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1219 11:48:21.471430   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8ff40b30-9ea7-11ee-bd07-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/false-377000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/tty,log=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/bzimage,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-377000"}
	I1219 11:48:21.471459   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8ff40b30-9ea7-11ee-bd07-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/false-377000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/tty,log=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/console-ring -f kexec,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/bzimage,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-377000"
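The Arguments and CmdLine lines above are the complete hyperkit invocation for this VM. A sketch of assembling it with os/exec; stateDir, uuid, and kernelCmdline stand in for the per-machine values spelled out in the log:

package sketch

import "os/exec"

// hyperkitCmd assembles the hyperkit invocation logged above.
func hyperkitCmd(stateDir, uuid, kernelCmdline string) *exec.Cmd {
	return exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", stateDir+"/hyperkit.pid", // pid file checked on each attempt
		"-c", "2",
		"-m", "3072M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk,"+stateDir+"/false-377000.rawdisk",
		"-s", "3,ahci-cd,"+stateDir+"/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty="+stateDir+"/tty,log="+stateDir+"/console-ring",
		"-f", "kexec,"+stateDir+"/bzimage,"+stateDir+"/initrd,"+kernelCmdline,
	)
}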
	I1219 11:48:21.471472   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1219 11:48:21.474446   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 DEBUG: hyperkit: Pid is 25270
	I1219 11:48:21.475029   25260 main.go:141] libmachine: (false-377000) DBG | Attempt 0
	I1219 11:48:21.475045   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:21.475115   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:21.476279   25260 main.go:141] libmachine: (false-377000) DBG | Searching for 56:7d:57:2e:c0:bb in /var/db/dhcpd_leases ...
	I1219 11:48:21.476494   25260 main.go:141] libmachine: (false-377000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I1219 11:48:21.476516   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:21.476537   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:21.476560   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:21.476574   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:21.476599   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:21.476607   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:21.476617   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:21.476625   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:21.476648   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:21.476665   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:21.476679   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:21.476696   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:21.476711   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:21.476724   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:21.476738   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:21.476750   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:21.476763   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:21.476782   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:21.476803   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:21.476821   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:21.476835   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:21.476844   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:21.476854   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:21.476865   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:21.476873   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:21.476881   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:21.476894   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:21.476903   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:21.476914   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:21.476922   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:21.476931   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:21.476941   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:21.476952   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:21.476960   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:21.476975   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
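Each numbered Attempt above re-reads macOS's /var/db/dhcpd_leases and scans the entries for the MAC address generated for this VM (56:7d:57:2e:c0:bb). A sketch of that lookup; the ip_address/hw_address field names, and the assumption that ip_address precedes hw_address within an entry, are inferred from the entries printed in the log:

package sketch

import (
	"bufio"
	"os"
	"strings"
)

// ipForMAC returns the ip_address of the lease entry whose hw_address
// matches mac, or "" if no entry matches.
func ipForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address lines look like "hw_address=1,56:7d:57:2e:c0:bb"
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}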
	I1219 11:48:21.482332   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1219 11:48:21.491177   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1219 11:48:21.491960   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1219 11:48:21.491976   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1219 11:48:21.491984   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1219 11:48:21.491990   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1219 11:48:21.889164   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1219 11:48:21.889182   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1219 11:48:21.993183   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1219 11:48:21.993204   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1219 11:48:21.993220   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1219 11:48:21.993239   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1219 11:48:21.994056   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1219 11:48:21.994066   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:21 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1219 11:48:23.478177   25260 main.go:141] libmachine: (false-377000) DBG | Attempt 1
	I1219 11:48:23.478194   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:23.478288   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:23.479218   25260 main.go:141] libmachine: (false-377000) DBG | Searching for 56:7d:57:2e:c0:bb in /var/db/dhcpd_leases ...
	I1219 11:48:23.479390   25260 main.go:141] libmachine: (false-377000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I1219 11:48:23.479399   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:23.479438   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:23.479446   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:23.479453   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:23.479461   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:23.479473   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:23.479489   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:23.479500   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:23.479509   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:23.479519   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:23.479528   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:23.479536   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:23.479551   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:23.479574   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:23.479626   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:23.479641   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:23.479666   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:23.479703   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:23.479716   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:23.479725   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:23.479734   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:23.479742   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:23.479753   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:23.479761   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:23.479784   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:23.479812   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:23.479818   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:23.479838   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:23.479860   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:23.479887   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:23.479893   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:23.479905   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:23.479914   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:23.479921   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:23.479933   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:25.481308   25260 main.go:141] libmachine: (false-377000) DBG | Attempt 2
	I1219 11:48:25.481327   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:25.481413   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:25.482487   25260 main.go:141] libmachine: (false-377000) DBG | Searching for 56:7d:57:2e:c0:bb in /var/db/dhcpd_leases ...
	I1219 11:48:25.482610   25260 main.go:141] libmachine: (false-377000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I1219 11:48:25.482620   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:25.482630   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:25.482667   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:25.482683   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:25.482694   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:25.482701   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:25.482712   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:25.482722   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:25.482731   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:25.482740   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:25.482756   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:25.482763   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:25.482771   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:25.482779   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:25.482802   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:25.482822   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:25.482836   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:25.482849   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:25.482870   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:25.482881   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:25.482889   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:25.482898   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:25.482906   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:25.482918   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:25.482926   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:25.482935   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:25.482943   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:25.482951   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:25.482959   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:25.482968   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:25.482976   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:25.482985   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:25.482993   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:25.483001   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:25.483012   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:27.097141   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:27 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1219 11:48:27.097217   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:27 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1219 11:48:27.097229   25260 main.go:141] libmachine: (false-377000) DBG | 2023/12/19 11:48:27 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1219 11:48:27.482947   25260 main.go:141] libmachine: (false-377000) DBG | Attempt 3
	I1219 11:48:27.482966   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:27.483042   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:27.483918   25260 main.go:141] libmachine: (false-377000) DBG | Searching for 56:7d:57:2e:c0:bb in /var/db/dhcpd_leases ...
	I1219 11:48:27.483994   25260 main.go:141] libmachine: (false-377000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I1219 11:48:27.484003   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:27.484015   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:27.484025   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:27.484032   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:27.484041   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:27.484050   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:27.484057   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:27.484065   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:27.484073   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:27.484093   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:27.484105   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:27.484119   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:27.484135   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:27.484155   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:27.484167   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:27.484175   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:27.484187   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:27.484198   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:27.484206   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:27.484214   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:27.484222   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:27.484241   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:27.484255   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:27.484264   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:27.484273   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:27.484286   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:27.484301   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:27.484311   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:27.484319   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:27.484327   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:27.484336   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:27.484349   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:27.484360   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:27.484369   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:27.484378   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:29.484206   25260 main.go:141] libmachine: (false-377000) DBG | Attempt 4
	I1219 11:48:29.484227   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:29.484303   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:29.485216   25260 main.go:141] libmachine: (false-377000) DBG | Searching for 56:7d:57:2e:c0:bb in /var/db/dhcpd_leases ...
	I1219 11:48:29.485330   25260 main.go:141] libmachine: (false-377000) DBG | Found 35 entries in /var/db/dhcpd_leases!
	I1219 11:48:29.485351   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:29.485362   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:29.485374   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:29.485384   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:29.485406   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:29.485417   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:29.485424   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:29.485452   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:29.485462   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:29.485470   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:29.485478   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:29.485485   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:29.485494   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:29.485504   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:29.485514   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:29.485522   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:29.485531   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:29.485539   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:29.485547   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:29.485555   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:29.485563   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:29.485571   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:29.485580   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:29.485589   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:29.485598   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:29.485613   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:29.485626   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:29.485636   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:29.485645   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:29.485654   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:29.485663   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:29.485670   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:29.485688   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:29.485701   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:29.485715   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:31.486020   25260 main.go:141] libmachine: (false-377000) DBG | Attempt 5
	I1219 11:48:31.486045   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:31.486107   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:31.486969   25260 main.go:141] libmachine: (false-377000) DBG | Searching for 56:7d:57:2e:c0:bb in /var/db/dhcpd_leases ...
	I1219 11:48:31.487045   25260 main.go:141] libmachine: (false-377000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I1219 11:48:31.487057   25260 main.go:141] libmachine: (false-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.21 HWAddress:56:7d:57:2e:c0:bb ID:1,56:7d:57:2e:c0:bb Lease:0x6583450e}
	I1219 11:48:31.487068   25260 main.go:141] libmachine: (false-377000) DBG | Found match: 56:7d:57:2e:c0:bb
	I1219 11:48:31.487075   25260 main.go:141] libmachine: (false-377000) DBG | IP: 192.168.172.21
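The match arrives on Attempt 5, roughly ten seconds after boot, because the driver polls the lease file every two seconds. A sketch of that polling loop, in the same package as (and reusing ipForMAC from) the lease-parsing sketch above:

package sketch

import (
	"fmt"
	"time"
)

// waitForIP polls the lease file every two seconds, matching the cadence of
// the Attempt lines above, until the MAC appears or the deadline passes.
func waitForIP(leaseFile, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		if ip, err := ipForMAC(leaseFile, mac); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
}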
	I1219 11:48:31.487131   25260 main.go:141] libmachine: (false-377000) Calling .GetConfigRaw
	I1219 11:48:31.487762   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:31.487866   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:31.487957   25260 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1219 11:48:31.487970   25260 main.go:141] libmachine: (false-377000) Calling .GetState
	I1219 11:48:31.488061   25260 main.go:141] libmachine: (false-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:31.488110   25260 main.go:141] libmachine: (false-377000) DBG | hyperkit pid from json: 25270
	I1219 11:48:31.488980   25260 main.go:141] libmachine: Detecting operating system of created instance...
	I1219 11:48:31.488991   25260 main.go:141] libmachine: Waiting for SSH to be available...
	I1219 11:48:31.488997   25260 main.go:141] libmachine: Getting to WaitForSSH function...
	I1219 11:48:31.489003   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:31.489087   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:31.489168   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.489235   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.489319   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:31.489447   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:31.489780   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:31.489788   25260 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1219 11:48:31.551668   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 11:48:31.551682   25260 main.go:141] libmachine: Detecting the provisioner...
	I1219 11:48:31.551693   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:31.551823   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:31.551909   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.552011   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.552113   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:31.552246   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:31.552501   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:31.552509   25260 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1219 11:48:31.615964   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1219 11:48:31.616040   25260 main.go:141] libmachine: found compatible host: buildroot
	I1219 11:48:31.616048   25260 main.go:141] libmachine: Provisioning with buildroot...
	I1219 11:48:31.616054   25260 main.go:141] libmachine: (false-377000) Calling .GetMachineName
	I1219 11:48:31.616186   25260 buildroot.go:166] provisioning hostname "false-377000"
	I1219 11:48:31.616194   25260 main.go:141] libmachine: (false-377000) Calling .GetMachineName
	I1219 11:48:31.616293   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:31.616391   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:31.616478   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.616566   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.616689   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:31.616821   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:31.617068   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:31.617079   25260 main.go:141] libmachine: About to run SSH command:
	sudo hostname false-377000 && echo "false-377000" | sudo tee /etc/hostname
	I1219 11:48:31.688166   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: false-377000
	
	I1219 11:48:31.688186   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:31.688330   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:31.688430   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.688559   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.688647   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:31.688785   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:31.689046   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:31.689059   25260 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-377000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-377000/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-377000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 11:48:31.755996   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 11:48:31.756018   25260 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17837-20429/.minikube CaCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17837-20429/.minikube}
	I1219 11:48:31.756029   25260 buildroot.go:174] setting up certificates
	I1219 11:48:31.756040   25260 provision.go:83] configureAuth start
	I1219 11:48:31.756048   25260 main.go:141] libmachine: (false-377000) Calling .GetMachineName
	I1219 11:48:31.756188   25260 main.go:141] libmachine: (false-377000) Calling .GetIP
	I1219 11:48:31.756278   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:31.756372   25260 provision.go:138] copyHostCerts
	I1219 11:48:31.756458   25260 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem, removing ...
	I1219 11:48:31.756468   25260 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem
	I1219 11:48:31.756866   25260 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem (1082 bytes)
	I1219 11:48:31.757088   25260 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem, removing ...
	I1219 11:48:31.757095   25260 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem
	I1219 11:48:31.757178   25260 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem (1123 bytes)
	I1219 11:48:31.757342   25260 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem, removing ...
	I1219 11:48:31.757349   25260 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem
	I1219 11:48:31.757422   25260 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem (1679 bytes)
	I1219 11:48:31.757560   25260 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem org=jenkins.false-377000 san=[192.168.172.21 192.168.172.21 localhost 127.0.0.1 minikube false-377000]
	I1219 11:48:31.966740   25260 provision.go:172] copyRemoteCerts
	I1219 11:48:31.966809   25260 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 11:48:31.966837   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:31.966998   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:31.967100   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:31.967200   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:31.967278   25260 sshutil.go:53] new ssh client: &{IP:192.168.172.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/id_rsa Username:docker}
	I1219 11:48:32.005744   25260 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1219 11:48:32.021766   25260 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 11:48:32.038932   25260 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 11:48:32.055883   25260 provision.go:86] duration metric: configureAuth took 299.827524ms
	I1219 11:48:32.055897   25260 buildroot.go:189] setting minikube options for container-runtime
	I1219 11:48:32.056043   25260 config.go:182] Loaded profile config "false-377000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:48:32.056056   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:32.056195   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.056282   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.056374   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.056453   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.056586   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.056715   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:32.056970   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:32.056979   25260 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1219 11:48:32.121857   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1219 11:48:32.121871   25260 buildroot.go:70] root file system type: tmpfs
	I1219 11:48:32.121949   25260 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1219 11:48:32.121967   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.122135   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.122234   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.122329   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.122431   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.122584   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:32.122831   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:32.122877   25260 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1219 11:48:32.194982   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1219 11:48:32.195009   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.195145   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.195237   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.195324   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.195415   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.195535   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:32.195798   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:32.195812   25260 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1219 11:48:32.773029   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1219 11:48:32.773044   25260 main.go:141] libmachine: Checking connection to Docker...
	I1219 11:48:32.773051   25260 main.go:141] libmachine: (false-377000) Calling .GetURL
	I1219 11:48:32.773188   25260 main.go:141] libmachine: Docker is up and running!
	I1219 11:48:32.773195   25260 main.go:141] libmachine: Reticulating splines...
	I1219 11:48:32.773200   25260 client.go:171] LocalClient.Create took 12.070241828s
	I1219 11:48:32.773212   25260 start.go:167] duration metric: libmachine.API.Create for "false-377000" took 12.070290794s
	I1219 11:48:32.773222   25260 start.go:300] post-start starting for "false-377000" (driver="hyperkit")
	I1219 11:48:32.773234   25260 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 11:48:32.773243   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:32.773375   25260 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 11:48:32.773387   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.773495   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.773592   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.773682   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.773772   25260 sshutil.go:53] new ssh client: &{IP:192.168.172.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/id_rsa Username:docker}
	I1219 11:48:32.811373   25260 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 11:48:32.813965   25260 info.go:137] Remote host: Buildroot 2021.02.12
	I1219 11:48:32.813977   25260 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/addons for local assets ...
	I1219 11:48:32.814060   25260 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/files for local assets ...
	I1219 11:48:32.814577   25260 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem -> 208672.pem in /etc/ssl/certs
	I1219 11:48:32.814791   25260 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 11:48:32.820883   25260 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem --> /etc/ssl/certs/208672.pem (1708 bytes)
	I1219 11:48:32.837792   25260 start.go:303] post-start completed in 64.560833ms
	I1219 11:48:32.837821   25260 main.go:141] libmachine: (false-377000) Calling .GetConfigRaw
	I1219 11:48:32.838388   25260 main.go:141] libmachine: (false-377000) Calling .GetIP
	I1219 11:48:32.838541   25260 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/false-377000/config.json ...
	I1219 11:48:32.838855   25260 start.go:128] duration metric: createHost completed in 12.198048219s
	I1219 11:48:32.838873   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.838956   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.839061   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.839150   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.839229   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.839342   25260 main.go:141] libmachine: Using SSH client type: native
	I1219 11:48:32.839587   25260 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.21 22 <nil> <nil>}
	I1219 11:48:32.839599   25260 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1219 11:48:32.903072   25260 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703015311.988447186
	
	I1219 11:48:32.903086   25260 fix.go:206] guest clock: 1703015311.988447186
	I1219 11:48:32.903092   25260 fix.go:219] Guest: 2023-12-19 11:48:31.988447186 -0800 PST Remote: 2023-12-19 11:48:32.838866 -0800 PST m=+12.845850909 (delta=-850.418814ms)
	I1219 11:48:32.903119   25260 fix.go:190] guest clock delta is within tolerance: -850.418814ms
	I1219 11:48:32.903123   25260 start.go:83] releasing machines lock for "false-377000", held for 12.262453832s
	I1219 11:48:32.903146   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:32.903311   25260 main.go:141] libmachine: (false-377000) Calling .GetIP
	I1219 11:48:32.903431   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:32.903742   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:32.903852   25260 main.go:141] libmachine: (false-377000) Calling .DriverName
	I1219 11:48:32.903999   25260 ssh_runner.go:195] Run: cat /version.json
	I1219 11:48:32.904013   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.904120   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.904215   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.904287   25260 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 11:48:32.904300   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.904332   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHHostname
	I1219 11:48:32.904410   25260 sshutil.go:53] new ssh client: &{IP:192.168.172.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/id_rsa Username:docker}
	I1219 11:48:32.904440   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHPort
	I1219 11:48:32.904541   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHKeyPath
	I1219 11:48:32.904659   25260 main.go:141] libmachine: (false-377000) Calling .GetSSHUsername
	I1219 11:48:32.904752   25260 sshutil.go:53] new ssh client: &{IP:192.168.172.21 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/false-377000/id_rsa Username:docker}
	I1219 11:48:32.938450   25260 ssh_runner.go:195] Run: systemctl --version
	I1219 11:48:32.945372   25260 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 11:48:32.991378   25260 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 11:48:32.991478   25260 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1219 11:48:32.997996   25260 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1219 11:48:33.007791   25260 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 11:48:33.007824   25260 start.go:475] detecting cgroup driver to use...
	I1219 11:48:33.007941   25260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:48:33.024302   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1219 11:48:33.032124   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 11:48:33.038792   25260 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 11:48:33.038846   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 11:48:33.045527   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:48:33.052585   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 11:48:33.060818   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:48:33.067865   25260 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 11:48:33.075279   25260 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 11:48:33.082562   25260 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 11:48:33.089149   25260 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 11:48:33.095572   25260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:48:33.187902   25260 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 11:48:33.201081   25260 start.go:475] detecting cgroup driver to use...
	I1219 11:48:33.201167   25260 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1219 11:48:33.213632   25260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:48:33.231297   25260 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 11:48:33.244844   25260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:48:33.253960   25260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:48:33.263039   25260 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 11:48:33.293288   25260 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:48:33.302693   25260 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:48:33.316808   25260 ssh_runner.go:195] Run: which cri-dockerd
	I1219 11:48:33.319378   25260 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1219 11:48:33.326227   25260 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1219 11:48:33.338389   25260 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1219 11:48:33.438274   25260 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1219 11:48:33.547380   25260 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1219 11:48:33.547452   25260 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1219 11:48:33.560567   25260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:48:33.656716   25260 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1219 11:48:35.120013   25260 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.463251727s)
	I1219 11:48:35.120079   25260 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1219 11:48:35.205714   25260 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1219 11:48:35.305120   25260 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1219 11:48:35.398186   25260 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:48:35.502673   25260 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1219 11:48:35.514294   25260 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1219 11:48:35.548021   25260 out.go:177] 
	W1219 11:48:35.568379   25260 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-19 19:48:28 UTC, ends at Tue 2023-12-19 19:48:34 UTC. --
	Dec 19 19:48:29 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:48:29 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:48:31 false-377000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:48:31 false-377000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:48:31 false-377000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:48:31 false-377000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:48:31 false-377000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:48:34 false-377000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:48:34 false-377000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:48:34 false-377000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:48:34 false-377000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 19 19:48:34 false-377000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1219 11:48:35.568419   25260 out.go:239] * 
	W1219 11:48:35.569575   25260 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 11:48:35.636600   25260 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/false/Start (15.74s)
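Note on the failure above: the journal shows systemd refusing to re-listen on cri-docker.socket while cri-docker.service is still active ("cri-docker.socket: Socket service cri-docker.service already active, refusing."), so "sudo systemctl restart cri-docker.socket" exits with status 1. A minimal manual check, assuming SSH access to the guest (e.g. via "minikube ssh") and that briefly stopping the service is acceptable; this is a diagnostic sketch for reproducing the condition, not part of the test harness:

	# Stop the already-active service so the socket unit can be restarted cleanly,
	# then bring the service back up and confirm both units are healthy.
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service
	systemctl status cri-docker.socket cri-docker.service --no-pager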

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (15.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : exit status 90 (15.219042002s)

                                                
                                                
-- stdout --
	* [enable-default-cni-377000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node enable-default-cni-377000 in cluster enable-default-cni-377000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 11:48:51.403491   25488 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:48:51.403785   25488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:48:51.403791   25488 out.go:309] Setting ErrFile to fd 2...
	I1219 11:48:51.403795   25488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:48:51.403984   25488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:48:51.405543   25488 out.go:303] Setting JSON to false
	I1219 11:48:51.442232   25488 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8301,"bootTime":1703007030,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:48:51.442329   25488 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:48:51.527226   25488 out.go:177] * [enable-default-cni-377000] minikube v1.32.0 on Darwin 14.2
	I1219 11:48:51.571129   25488 out.go:177]   - MINIKUBE_LOCATION=17837
	I1219 11:48:51.550543   25488 notify.go:220] Checking for updates...
	I1219 11:48:51.613071   25488 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:48:51.634041   25488 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:48:51.655115   25488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:48:51.676114   25488 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:48:51.697286   25488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 11:48:51.718571   25488 config.go:182] Loaded profile config "custom-flannel-377000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:48:51.719096   25488 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:48:51.749290   25488 out.go:177] * Using the hyperkit driver based on user configuration
	I1219 11:48:51.770028   25488 start.go:298] selected driver: hyperkit
	I1219 11:48:51.770060   25488 start.go:902] validating driver "hyperkit" against <nil>
	I1219 11:48:51.770082   25488 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 11:48:51.774740   25488 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:48:51.775267   25488 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:48:51.783125   25488 install.go:137] /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:48:51.787858   25488 install.go:79] stdout: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:48:51.787885   25488 install.go:81] /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit looks good
	I1219 11:48:51.787923   25488 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	E1219 11:48:51.788110   25488 start_flags.go:465] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1219 11:48:51.788129   25488 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 11:48:51.788199   25488 cni.go:84] Creating CNI manager for "bridge"
	I1219 11:48:51.788207   25488 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 11:48:51.788216   25488 start_flags.go:323] config:
	{Name:enable-default-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-377000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:48:51.788364   25488 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:48:51.831154   25488 out.go:177] * Starting control plane node enable-default-cni-377000 in cluster enable-default-cni-377000
	I1219 11:48:51.852031   25488 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1219 11:48:51.852080   25488 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1219 11:48:51.852105   25488 cache.go:56] Caching tarball of preloaded images
	I1219 11:48:51.852276   25488 preload.go:174] Found /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 11:48:51.852289   25488 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1219 11:48:51.852404   25488 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/enable-default-cni-377000/config.json ...
	I1219 11:48:51.852427   25488 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/enable-default-cni-377000/config.json: {Name:mk1b4331f14eaf0fbc1d9c8095e8fd97c0314b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 11:48:51.853172   25488 start.go:365] acquiring machines lock for enable-default-cni-377000: {Name:mkc3d80bc77e215fa21f0c59378bcbfaf828d0a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 11:48:51.853262   25488 start.go:369] acquired machines lock for "enable-default-cni-377000" in 67.158µs
	I1219 11:48:51.853296   25488 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-377000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1219 11:48:51.853362   25488 start.go:125] createHost starting for "" (driver="hyperkit")
	I1219 11:48:51.875198   25488 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1219 11:48:51.875492   25488 main.go:141] libmachine: Found binary path at /Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit
	I1219 11:48:51.875540   25488 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:48:51.883958   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60544
	I1219 11:48:51.884335   25488 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:48:51.884762   25488 main.go:141] libmachine: Using API Version  1
	I1219 11:48:51.884774   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:48:51.885008   25488 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:48:51.885113   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetMachineName
	I1219 11:48:51.885190   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:48:51.885286   25488 start.go:159] libmachine.API.Create for "enable-default-cni-377000" (driver="hyperkit")
	I1219 11:48:51.885315   25488 client.go:168] LocalClient.Create starting
	I1219 11:48:51.885344   25488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem
	I1219 11:48:51.885393   25488 main.go:141] libmachine: Decoding PEM data...
	I1219 11:48:51.885410   25488 main.go:141] libmachine: Parsing certificate...
	I1219 11:48:51.885473   25488 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem
	I1219 11:48:51.885508   25488 main.go:141] libmachine: Decoding PEM data...
	I1219 11:48:51.885520   25488 main.go:141] libmachine: Parsing certificate...
	I1219 11:48:51.885534   25488 main.go:141] libmachine: Running pre-create checks...
	I1219 11:48:51.885542   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .PreCreateCheck
	I1219 11:48:51.885614   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:51.885784   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetConfigRaw
	I1219 11:48:51.912625   25488 main.go:141] libmachine: Creating machine...
	I1219 11:48:51.912663   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .Create
	I1219 11:48:51.912844   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:51.913109   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | I1219 11:48:51.912812   25496 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:48:51.913246   25488 main.go:141] libmachine: (enable-default-cni-377000) Downloading /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17837-20429/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso...
	I1219 11:48:52.081292   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | I1219 11:48:52.081225   25496 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/id_rsa...
	I1219 11:48:52.209382   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | I1219 11:48:52.209318   25496 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/enable-default-cni-377000.rawdisk...
	I1219 11:48:52.209403   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Writing magic tar header
	I1219 11:48:52.209417   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Writing SSH key tar header
	I1219 11:48:52.209949   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | I1219 11:48:52.209920   25496 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000 ...
	I1219 11:48:52.548447   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:52.548467   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/hyperkit.pid
	I1219 11:48:52.548479   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Using UUID a28a1f8c-9ea7-11ee-8797-149d997f80ea
	I1219 11:48:52.577734   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Generated MAC 2:f4:81:7d:a0:8c
	I1219 11:48:52.577753   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=enable-default-cni-377000
	I1219 11:48:52.577784   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a28a1f8c-9ea7-11ee-8797-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000963c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1219 11:48:52.577831   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a28a1f8c-9ea7-11ee-8797-149d997f80ea", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000963c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1219 11:48:52.577878   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a28a1f8c-9ea7-11ee-8797-149d997f80ea", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/enable-default-cni-377000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/tty,log=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/bzimage,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=enable-default-cni-377000"}
	I1219 11:48:52.577930   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a28a1f8c-9ea7-11ee-8797-149d997f80ea -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/enable-default-cni-377000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/tty,log=/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/console-ring -f kexec,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/bzimage,/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=enable-default-cni-377000"
	I1219 11:48:52.577946   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1219 11:48:52.580673   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 DEBUG: hyperkit: Pid is 25497
	I1219 11:48:52.581081   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Attempt 0
	I1219 11:48:52.581095   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:52.581162   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:48:52.582289   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Searching for 2:f4:81:7d:a0:8c in /var/db/dhcpd_leases ...
	I1219 11:48:52.582392   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I1219 11:48:52.582403   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.21 HWAddress:56:7d:57:2e:c0:bb ID:1,56:7d:57:2e:c0:bb Lease:0x6583450e}
	I1219 11:48:52.582413   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:52.582420   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:52.582448   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:52.582470   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:52.582479   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:52.582490   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:52.582504   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:52.582543   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:52.582557   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:52.582601   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:52.582615   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:52.582626   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:52.582636   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:52.582646   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:52.582657   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:52.582667   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:52.582680   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:52.582688   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:52.582700   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:52.582716   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:52.582729   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:52.582737   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:52.582747   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:52.582761   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:52.582774   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:52.582785   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:52.582794   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:52.582802   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:52.582810   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:52.582824   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:52.582838   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:52.582849   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:52.582859   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:52.582869   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:52.582876   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:52.588134   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1219 11:48:52.596949   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1219 11:48:52.597695   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1219 11:48:52.597715   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1219 11:48:52.597752   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1219 11:48:52.597768   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1219 11:48:52.993922   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1219 11:48:52.993939   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:52 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1219 11:48:53.097995   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1219 11:48:53.098017   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1219 11:48:53.098052   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1219 11:48:53.098065   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1219 11:48:53.098930   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1219 11:48:53.098940   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1219 11:48:54.582833   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Attempt 1
	I1219 11:48:54.582855   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:54.582938   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:48:54.583865   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Searching for 2:f4:81:7d:a0:8c in /var/db/dhcpd_leases ...
	I1219 11:48:54.583930   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I1219 11:48:54.583939   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.21 HWAddress:56:7d:57:2e:c0:bb ID:1,56:7d:57:2e:c0:bb Lease:0x6583450e}
	I1219 11:48:54.583950   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:54.583957   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:54.583976   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:54.583990   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:54.583999   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:54.584009   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:54.584027   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:54.584035   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:54.584042   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:54.584051   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:54.584059   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:54.584066   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:54.584076   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:54.584085   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:54.584098   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:54.584108   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:54.584116   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:54.584124   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:54.584132   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:54.584141   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:54.584148   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:54.584157   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:54.584165   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:54.584175   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:54.584185   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:54.584196   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:54.584205   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:54.584214   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:54.584222   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:54.584231   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:54.584239   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:54.584250   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:54.584258   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:54.584266   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:54.584279   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:56.585116   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Attempt 2
	I1219 11:48:56.585136   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:56.585214   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:48:56.586124   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Searching for 2:f4:81:7d:a0:8c in /var/db/dhcpd_leases ...
	I1219 11:48:56.586179   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I1219 11:48:56.586190   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.21 HWAddress:56:7d:57:2e:c0:bb ID:1,56:7d:57:2e:c0:bb Lease:0x6583450e}
	I1219 11:48:56.586204   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:56.586212   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:56.586220   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:56.586229   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:56.586237   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:56.586246   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:56.586257   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:56.586264   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:56.586293   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:56.586301   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:56.586308   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:56.586318   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:56.586327   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:56.586336   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:56.586344   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:56.586355   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:56.586363   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:56.586371   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:56.586379   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:56.586388   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:56.586396   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:56.586404   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:56.586411   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:56.586420   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:56.586428   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:56.586437   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:56.586444   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:56.586453   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:56.586461   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:56.586469   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:56.586477   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:56.586486   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:56.586496   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:56.586509   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:56.586537   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:48:58.147925   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:58 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1219 11:48:58.147940   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:58 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1219 11:48:58.147948   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | 2023/12/19 11:48:58 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1219 11:48:58.587561   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Attempt 3
	I1219 11:48:58.587583   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:48:58.587661   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:48:58.588617   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Searching for 2:f4:81:7d:a0:8c in /var/db/dhcpd_leases ...
	I1219 11:48:58.588695   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I1219 11:48:58.588704   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.21 HWAddress:56:7d:57:2e:c0:bb ID:1,56:7d:57:2e:c0:bb Lease:0x6583450e}
	I1219 11:48:58.588715   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:48:58.588723   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:48:58.588731   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:48:58.588740   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:48:58.588748   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:48:58.588760   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:48:58.588783   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:48:58.588797   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:48:58.588812   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:48:58.588822   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:48:58.588845   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:48:58.588859   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:48:58.588868   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:48:58.588877   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:48:58.588885   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:48:58.588894   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:48:58.588902   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:48:58.588909   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:48:58.588916   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:48:58.588926   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:48:58.588963   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:48:58.588981   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:48:58.588992   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:48:58.589001   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:48:58.589011   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:48:58.589033   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:48:58.589050   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:48:58.589060   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:48:58.589072   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:48:58.589081   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:48:58.589090   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:48:58.589098   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:48:58.589106   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:48:58.589114   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:48:58.589124   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:49:00.589883   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Attempt 4
	I1219 11:49:00.589902   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:49:00.590018   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:49:00.591017   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Searching for 2:f4:81:7d:a0:8c in /var/db/dhcpd_leases ...
	I1219 11:49:00.591118   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I1219 11:49:00.591138   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.21 HWAddress:56:7d:57:2e:c0:bb ID:1,56:7d:57:2e:c0:bb Lease:0x6583450e}
	I1219 11:49:00.591154   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.20 HWAddress:da:4a:dc:78:e0:14 ID:1,da:4a:dc:78:e0:14 Lease:0x658344e0}
	I1219 11:49:00.591169   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.19 HWAddress:fa:a6:e7:68:3a:69 ID:1,fa:a6:e7:68:3a:69 Lease:0x65834492}
	I1219 11:49:00.591185   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.18 HWAddress:22:82:cd:2a:22:3d ID:1,22:82:cd:2a:22:3d Lease:0x6583447b}
	I1219 11:49:00.591198   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.17 HWAddress:3a:6b:88:a7:27:2f ID:1,3a:6b:88:a7:27:2f Lease:0x6583443a}
	I1219 11:49:00.591210   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.16 HWAddress:6a:25:e3:16:c9:c7 ID:1,6a:25:e3:16:c9:c7 Lease:0x6583441c}
	I1219 11:49:00.591231   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.15 HWAddress:3e:d6:4e:5f:23:74 ID:1,3e:d6:4e:5f:23:74 Lease:0x6581f2af}
	I1219 11:49:00.591247   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.14 HWAddress:ce:a0:6d:38:84:7d ID:1,ce:a0:6d:38:84:7d Lease:0x6581f27f}
	I1219 11:49:00.591262   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.13 HWAddress:76:18:77:23:3a:1d ID:1,76:18:77:23:3a:1d Lease:0x6583437d}
	I1219 11:49:00.591287   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.12 HWAddress:da:cb:14:cf:24:8 ID:1,da:cb:14:cf:24:8 Lease:0x658343d6}
	I1219 11:49:00.591302   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.11 HWAddress:7a:6e:b6:b4:4f:79 ID:1,7a:6e:b6:b4:4f:79 Lease:0x6583430d}
	I1219 11:49:00.591355   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.10 HWAddress:fe:5c:36:6e:db:bd ID:1,fe:5c:36:6e:db:bd Lease:0x6581f135}
	I1219 11:49:00.591365   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.9 HWAddress:82:7c:ea:83:b6:c ID:1,82:7c:ea:83:b6:c Lease:0x65834282}
	I1219 11:49:00.591382   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.8 HWAddress:6e:6:46:71:16:99 ID:1,6e:6:46:71:16:99 Lease:0x6581f110}
	I1219 11:49:00.591396   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.7 HWAddress:a2:e7:b7:b9:1e:19 ID:1,a2:e7:b7:b9:1e:19 Lease:0x65834258}
	I1219 11:49:00.591412   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.6 HWAddress:ce:b1:fb:59:7e:60 ID:1,ce:b1:fb:59:7e:60 Lease:0x6581f0e9}
	I1219 11:49:00.591426   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.5 HWAddress:a2:7a:97:db:57:59 ID:1,a2:7a:97:db:57:59 Lease:0x65834231}
	I1219 11:49:00.591437   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.4 HWAddress:c6:de:9b:84:3b:e1 ID:1,c6:de:9b:84:3b:e1 Lease:0x658341b6}
	I1219 11:49:00.591447   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.3 HWAddress:c2:67:44:b6:8d:bd ID:1,c2:67:44:b6:8d:bd Lease:0x65834144}
	I1219 11:49:00.591459   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.2 HWAddress:ee:ca:fe:53:30:8c ID:1,ee:ca:fe:53:30:8c Lease:0x6583410e}
	I1219 11:49:00.591473   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.14 HWAddress:aa:d8:b2:65:7c:b5 ID:1,aa:d8:b2:65:7c:b5 Lease:0x6581ef05}
	I1219 11:49:00.591485   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.171.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65834081}
	I1219 11:49:00.591498   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.13 HWAddress:c2:ef:6:97:35:2a ID:1,c2:ef:6:97:35:2a Lease:0x6581ee60}
	I1219 11:49:00.591510   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.12 HWAddress:e2:96:5f:3b:13:ad ID:1,e2:96:5f:3b:13:ad Lease:0x6583402e}
	I1219 11:49:00.591522   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.11 HWAddress:42:53:72:6b:ee:58 ID:1,42:53:72:6b:ee:58 Lease:0x65833ffb}
	I1219 11:49:00.591530   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.10 HWAddress:a:8e:92:28:75:7c ID:1,a:8e:92:28:75:7c Lease:0x6581ecc6}
	I1219 11:49:00.591540   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.9 HWAddress:4e:4:0:c3:5b:7a ID:1,4e:4:0:c3:5b:7a Lease:0x6581ecb0}
	I1219 11:49:00.591553   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.8 HWAddress:4e:3a:a0:81:67:3f ID:1,4e:3a:a0:81:67:3f Lease:0x65833dea}
	I1219 11:49:00.591576   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.7 HWAddress:e2:df:2a:fe:b4:e4 ID:1,e2:df:2a:fe:b4:e4 Lease:0x65833dc6}
	I1219 11:49:00.591590   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.6 HWAddress:fe:92:2d:52:cd:1d ID:1,fe:92:2d:52:cd:1d Lease:0x65833d88}
	I1219 11:49:00.591615   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.5 HWAddress:b2:a1:8:ad:dc:eb ID:1,b2:a1:8:ad:dc:eb Lease:0x65833d09}
	I1219 11:49:00.591638   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.4 HWAddress:2:d3:db:c8:4e:cf ID:1,2:d3:db:c8:4e:cf Lease:0x65833cee}
	I1219 11:49:00.591648   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.3 HWAddress:4a:f4:1d:72:d3:b4 ID:1,4a:f4:1d:72:d3:b4 Lease:0x65833bfb}
	I1219 11:49:00.591657   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.170.2 HWAddress:36:4b:38:10:56:4e ID:1,36:4b:38:10:56:4e Lease:0x6581ea70}
	I1219 11:49:00.591664   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name: IPAddress:192.168.169.2 HWAddress:9e:d9:1a:67:9d:a4 ID:1,9e:d9:1a:67:9d:a4 Lease:0x65833bc4}
	I1219 11:49:00.591689   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.169.3 HWAddress:12:52:b:2c:b3:ec ID:1,12:52:b:2c:b3:ec Lease:0x65833a8a}
	I1219 11:49:02.591670   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Attempt 5
	I1219 11:49:02.591693   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:49:02.591781   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:49:02.592741   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Searching for 2:f4:81:7d:a0:8c in /var/db/dhcpd_leases ...
	I1219 11:49:02.592837   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I1219 11:49:02.592851   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.172.22 HWAddress:2:f4:81:7d:a0:8c ID:1,2:f4:81:7d:a0:8c Lease:0x6583452d}
	I1219 11:49:02.592867   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | Found match: 2:f4:81:7d:a0:8c
	I1219 11:49:02.592885   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | IP: 192.168.172.22
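
[editor's note] The "Attempt N" blocks above are the driver polling macOS's DHCP lease database until the new VM's MAC address (2:f4:81:7d:a0:8c) appears, at which point it learns the VM's IP (192.168.172.22). Below is a minimal Go sketch of that lookup; it assumes the bootpd on-disk lease format (blocks containing ip_address= and hw_address= lines, with ip_address first), which differs from the parsed {Name: IPAddress: ...} form printed in the log.

// leaselookup.go: poll /var/db/dhcpd_leases until an entry for the VM's
// MAC appears, then report its IP. A sketch of the lookup the driver logs
// as "Searching for <mac> in /var/db/dhcpd_leases"; the field names are
// an assumption about macOS bootpd's lease format.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// ipForMAC scans the lease file for a block whose hw_address ends in
// ","+mac (values look like "hw_address=1,2:f4:81:7d:a0:8c") and returns
// that block's ip_address, or "" if no entry matches yet.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
			return ip, nil // ip_address precedes hw_address within a block
		}
	}
	return "", sc.Err()
}

func main() {
	const mac = "2:f4:81:7d:a0:8c" // MAC from the log above
	for attempt := 0; attempt < 30; attempt++ {
		ip, err := ipForMAC("/var/db/dhcpd_leases", mac)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ip != "" {
			fmt.Println("IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // the driver retries on a similar cadence
	}
	fmt.Fprintln(os.Stderr, "no lease found")
	os.Exit(1)
}

Because only an exact MAC match is accepted, the 36 pre-existing leases from earlier test VMs seen above just add scan time per attempt; they cannot produce a false match.
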
	I1219 11:49:02.592938   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetConfigRaw
	I1219 11:49:02.593562   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:02.593672   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:02.593773   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1219 11:49:02.593787   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetState
	I1219 11:49:02.593875   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | exe=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin/docker-machine-driver-hyperkit uid=0
	I1219 11:49:02.593935   25488 main.go:141] libmachine: (enable-default-cni-377000) DBG | hyperkit pid from json: 25497
	I1219 11:49:02.594905   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I1219 11:49:02.594918   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I1219 11:49:02.594926   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I1219 11:49:02.594934   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:02.595048   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:02.595135   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.595222   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.595344   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:02.595507   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:02.595828   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:02.595837   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1219 11:49:02.664000   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
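
[editor's note] Once the IP is known, libmachine gates further provisioning on sshd accepting a trivial command, the "exit 0" just above. A minimal sketch of an equivalent readiness probe using the system ssh client follows; the key path is a placeholder, and the user and IP are the ones from this run.

// sshprobe.go: wait for a VM's sshd to accept connections by running
// "exit 0" until it succeeds, mirroring the WaitForSSH step in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	host := "docker@192.168.172.22"       // user and IP from the log above
	key := "/path/to/machines/id_rsa"     // placeholder key path
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-i", key,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			host, "exit 0")
		if err := cmd.Run(); err == nil { // exit status 0 means sshd is ready
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for SSH")
	os.Exit(1)
}

Probing with a real command, rather than just dialing TCP port 22, confirms that authentication and shell execution both work, which is exactly what the provisioning steps that follow depend on.
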
	I1219 11:49:02.664014   25488 main.go:141] libmachine: Detecting the provisioner...
	I1219 11:49:02.664020   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:02.664150   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:02.664255   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.664338   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.664422   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:02.664546   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:02.664801   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:02.664810   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1219 11:49:02.732755   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gae27a7b-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1219 11:49:02.732826   25488 main.go:141] libmachine: found compatible host: buildroot
	I1219 11:49:02.732837   25488 main.go:141] libmachine: Provisioning with buildroot...
	I1219 11:49:02.732844   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetMachineName
	I1219 11:49:02.732994   25488 buildroot.go:166] provisioning hostname "enable-default-cni-377000"
	I1219 11:49:02.733007   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetMachineName
	I1219 11:49:02.733104   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:02.733197   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:02.733292   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.733407   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.733506   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:02.733628   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:02.733879   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:02.733889   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-377000 && echo "enable-default-cni-377000" | sudo tee /etc/hostname
	I1219 11:49:02.812305   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-377000
	
	I1219 11:49:02.812324   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:02.812459   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:02.812558   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.812666   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.812780   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:02.812936   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:02.813201   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:02.813223   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-377000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-377000/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-377000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 11:49:02.887071   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1219 11:49:02.887091   25488 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17837-20429/.minikube CaCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17837-20429/.minikube}
	I1219 11:49:02.887106   25488 buildroot.go:174] setting up certificates
	I1219 11:49:02.887117   25488 provision.go:83] configureAuth start
	I1219 11:49:02.887124   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetMachineName
	I1219 11:49:02.887270   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetIP
	I1219 11:49:02.887365   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:02.887452   25488 provision.go:138] copyHostCerts
	I1219 11:49:02.887535   25488 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem, removing ...
	I1219 11:49:02.887551   25488 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem
	I1219 11:49:02.887719   25488 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/ca.pem (1082 bytes)
	I1219 11:49:02.887962   25488 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem, removing ...
	I1219 11:49:02.887968   25488 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem
	I1219 11:49:02.888091   25488 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/cert.pem (1123 bytes)
	I1219 11:49:02.888282   25488 exec_runner.go:144] found /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem, removing ...
	I1219 11:49:02.888288   25488 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem
	I1219 11:49:02.888364   25488 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17837-20429/.minikube/key.pem (1679 bytes)
	I1219 11:49:02.888514   25488 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-377000 san=[192.168.172.22 192.168.172.22 localhost 127.0.0.1 minikube enable-default-cni-377000]
	I1219 11:49:02.963952   25488 provision.go:172] copyRemoteCerts
	I1219 11:49:02.964008   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 11:49:02.964027   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:02.964176   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:02.964284   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:02.964373   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:02.964468   25488 sshutil.go:53] new ssh client: &{IP:192.168.172.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/id_rsa Username:docker}
	I1219 11:49:03.005697   25488 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 11:49:03.026257   25488 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 11:49:03.042814   25488 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 11:49:03.059641   25488 provision.go:86] duration metric: configureAuth took 172.50318ms
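
[editor's note] The configureAuth step above generated a server certificate whose SAN list (san=[192.168.172.22 192.168.172.22 localhost 127.0.0.1 minikube enable-default-cni-377000]) lets the Docker daemon be addressed by IP or by any of those names over TLS. A minimal, self-signed Go sketch of issuing a certificate with that kind of SAN list is below; note the real provisioner signs with minikube's CA (ca.pem/ca-key.pem) rather than self-signing, and key persistence is omitted here for brevity.

// servercert.go: issue a self-signed TLS server certificate carrying the
// same kind of SAN list the provisioner logs (IPs plus DNS names).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.enable-default-cni-377000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log above.
		IPAddresses: []net.IP{net.ParseIP("192.168.172.22"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "enable-default-cni-377000"},
	}
	// Self-signed: template doubles as parent. The real flow passes the CA
	// certificate and CA private key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

The scp steps that follow in the log then place ca.pem, server.pem, and server-key.pem under /etc/docker on the VM, which is where the dockerd flags in the generated unit file expect them.
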
	I1219 11:49:03.059658   25488 buildroot.go:189] setting minikube options for container-runtime
	I1219 11:49:03.059804   25488 config.go:182] Loaded profile config "enable-default-cni-377000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:49:03.059818   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:03.059973   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.060070   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.060161   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.060266   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.060370   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.060493   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:03.060770   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:03.060780   25488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1219 11:49:03.130105   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1219 11:49:03.130117   25488 buildroot.go:70] root file system type: tmpfs
	I1219 11:49:03.130192   25488 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1219 11:49:03.130206   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.130354   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.130435   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.130512   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.130592   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.130703   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:03.130954   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:03.131000   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1219 11:49:03.208932   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1219 11:49:03.208961   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.209141   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.209246   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.209371   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.209486   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.209622   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:03.209914   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:03.209929   25488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1219 11:49:03.773888   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1219 11:49:03.773927   25488 main.go:141] libmachine: Checking connection to Docker...
	I1219 11:49:03.773948   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetURL
	I1219 11:49:03.774114   25488 main.go:141] libmachine: Docker is up and running!
	I1219 11:49:03.774130   25488 main.go:141] libmachine: Reticulating splines...
	I1219 11:49:03.774140   25488 client.go:171] LocalClient.Create took 11.888713767s
	I1219 11:49:03.774172   25488 start.go:167] duration metric: libmachine.API.Create for "enable-default-cni-377000" took 11.888779436s
	I1219 11:49:03.774190   25488 start.go:300] post-start starting for "enable-default-cni-377000" (driver="hyperkit")
	I1219 11:49:03.774210   25488 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 11:49:03.774226   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:03.774388   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 11:49:03.774400   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.774532   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.774637   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.774758   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.774879   25488 sshutil.go:53] new ssh client: &{IP:192.168.172.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/id_rsa Username:docker}
	I1219 11:49:03.815528   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 11:49:03.818590   25488 info.go:137] Remote host: Buildroot 2021.02.12
	I1219 11:49:03.818606   25488 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/addons for local assets ...
	I1219 11:49:03.818698   25488 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17837-20429/.minikube/files for local assets ...
	I1219 11:49:03.818896   25488 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem -> 208672.pem in /etc/ssl/certs
	I1219 11:49:03.819123   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 11:49:03.825549   25488 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/ssl/certs/208672.pem --> /etc/ssl/certs/208672.pem (1708 bytes)
	I1219 11:49:03.843619   25488 start.go:303] post-start completed in 69.415359ms
	I1219 11:49:03.843660   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetConfigRaw
	I1219 11:49:03.844367   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetIP
	I1219 11:49:03.844592   25488 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/enable-default-cni-377000/config.json ...
	I1219 11:49:03.845106   25488 start.go:128] duration metric: createHost completed in 11.991630378s
	I1219 11:49:03.845125   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.845303   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.845423   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.845611   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.845719   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.845851   25488 main.go:141] libmachine: Using SSH client type: native
	I1219 11:49:03.846104   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1406660] 0x1409340 <nil>  [] 0s} 192.168.172.22 22 <nil> <nil>}
	I1219 11:49:03.846114   25488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1219 11:49:03.915111   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1703015342.895339809
	
	I1219 11:49:03.915127   25488 fix.go:206] guest clock: 1703015342.895339809
	I1219 11:49:03.915133   25488 fix.go:219] Guest: 2023-12-19 11:49:02.895339809 -0800 PST Remote: 2023-12-19 11:49:03.845117 -0800 PST m=+12.486952793 (delta=-949.777191ms)
	I1219 11:49:03.915150   25488 fix.go:190] guest clock delta is within tolerance: -949.777191ms
	I1219 11:49:03.915154   25488 start.go:83] releasing machines lock for "enable-default-cni-377000", held for 12.061776522s
	I1219 11:49:03.915172   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:03.915308   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetIP
	I1219 11:49:03.915410   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:03.915744   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:03.915851   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .DriverName
	I1219 11:49:03.915940   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 11:49:03.915984   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.916009   25488 ssh_runner.go:195] Run: cat /version.json
	I1219 11:49:03.916021   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHHostname
	I1219 11:49:03.916096   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.916120   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHPort
	I1219 11:49:03.916209   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.916225   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHKeyPath
	I1219 11:49:03.916320   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.916337   25488 main.go:141] libmachine: (enable-default-cni-377000) Calling .GetSSHUsername
	I1219 11:49:03.916418   25488 sshutil.go:53] new ssh client: &{IP:192.168.172.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/id_rsa Username:docker}
	I1219 11:49:03.916442   25488 sshutil.go:53] new ssh client: &{IP:192.168.172.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/enable-default-cni-377000/id_rsa Username:docker}
	I1219 11:49:03.999125   25488 ssh_runner.go:195] Run: systemctl --version
	I1219 11:49:04.002881   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 11:49:04.007686   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 11:49:04.007739   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 11:49:04.018876   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 11:49:04.018893   25488 start.go:475] detecting cgroup driver to use...
	I1219 11:49:04.019065   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:49:04.033835   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1219 11:49:04.041500   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 11:49:04.048949   25488 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 11:49:04.049006   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 11:49:04.057205   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:49:04.065230   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 11:49:04.074431   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 11:49:04.082118   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 11:49:04.089726   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 11:49:04.097131   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 11:49:04.104277   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 11:49:04.112211   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:49:04.197240   25488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 11:49:04.208583   25488 start.go:475] detecting cgroup driver to use...
	I1219 11:49:04.208653   25488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1219 11:49:04.225671   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:49:04.235374   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1219 11:49:04.249406   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1219 11:49:04.258831   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:49:04.268964   25488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 11:49:04.294582   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 11:49:04.304035   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 11:49:04.316585   25488 ssh_runner.go:195] Run: which cri-dockerd
	I1219 11:49:04.319166   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1219 11:49:04.325897   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1219 11:49:04.337745   25488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1219 11:49:04.434714   25488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1219 11:49:04.526656   25488 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1219 11:49:04.526735   25488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1219 11:49:04.538000   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:49:04.641272   25488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1219 11:49:05.988795   25488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.347494581s)
	I1219 11:49:05.988861   25488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1219 11:49:06.079312   25488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1219 11:49:06.168980   25488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1219 11:49:06.263474   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 11:49:06.363440   25488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1219 11:49:06.375552   25488 ssh_runner.go:195] Run: sudo journalctl --no-pager -u cri-docker.socket
	I1219 11:49:06.407334   25488 out.go:177] 
	W1219 11:49:06.453122   25488 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Tue 2023-12-19 19:48:59 UTC, ends at Tue 2023-12-19 19:49:05 UTC. --
	Dec 19 19:49:00 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:49:00 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:49:02 enable-default-cni-377000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:49:02 enable-default-cni-377000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:49:02 enable-default-cni-377000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:49:02 enable-default-cni-377000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 19 19:49:02 enable-default-cni-377000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 19 19:49:05 enable-default-cni-377000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 19 19:49:05 enable-default-cni-377000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 19 19:49:05 enable-default-cni-377000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 19 19:49:05 enable-default-cni-377000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 19 19:49:05 enable-default-cni-377000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	W1219 11:49:06.453141   25488 out.go:239] * 
	W1219 11:49:06.453811   25488 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1219 11:49:06.539767   25488 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (15.24s)
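
Analysis: the journal excerpt above pinpoints the failure. systemd logs "cri-docker.socket: Socket service cri-docker.service already active, refusing." followed by "Failed to listen on CRI Docker Socket for the API.", so `sudo systemctl restart cri-docker.socket` exits 1 and minikube aborts with RUNTIME_ENABLE (exit status 90). The sketch below is a hypothetical manual check of that failure mode from inside the VM (e.g. via `minikube ssh`); the unit names are taken from the log, but the commands and the workaround are assumptions, not the harness's fix.

	# Restarting a .socket unit while its paired .service is already active
	# makes systemd refuse to re-bind the socket, as the journal above records.
	sudo systemctl is-active cri-docker.service   # "active"
	sudo systemctl restart cri-docker.socket      # fails: "... already active, refusing."

	# Possible workaround (assumption): stop the paired service first so the
	# socket can be re-bound, then bring both units back up.
	sudo systemctl stop cri-docker.service
	sudo systemctl restart cri-docker.socket
	sudo systemctl start cri-docker.service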

                                                
                                    

Test pass (286/314)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 41.72
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.28.4/json-events 14.8
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.3
17 TestDownloadOnly/v1.29.0-rc.2/json-events 15.6
18 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.32
23 TestDownloadOnly/DeleteAll 0.39
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.37
26 TestBinaryMirror 1.01
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
32 TestAddons/Setup 199.28
34 TestAddons/parallel/Registry 18.67
35 TestAddons/parallel/Ingress 20.95
36 TestAddons/parallel/InspektorGadget 10.47
37 TestAddons/parallel/MetricsServer 5.47
38 TestAddons/parallel/HelmTiller 10.3
40 TestAddons/parallel/CSI 38.6
41 TestAddons/parallel/Headlamp 14.2
42 TestAddons/parallel/CloudSpanner 6.37
43 TestAddons/parallel/LocalPath 53.44
44 TestAddons/parallel/NvidiaDevicePlugin 5.37
47 TestAddons/serial/GCPAuth/Namespaces 0.09
48 TestAddons/StoppedEnableDisable 5.75
49 TestCertOptions 37.83
50 TestCertExpiration 241.57
51 TestDockerFlags 38.33
52 TestForceSystemdFlag 52.18
53 TestForceSystemdEnv 45.53
56 TestHyperKitDriverInstallOrUpdate 11.35
59 TestErrorSpam/setup 34.73
60 TestErrorSpam/start 1.35
61 TestErrorSpam/status 0.48
62 TestErrorSpam/pause 1.3
63 TestErrorSpam/unpause 1.25
64 TestErrorSpam/stop 3.69
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 50.4
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.47
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 8.71
76 TestFunctional/serial/CacheCmd/cache/add_local 1.44
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
78 TestFunctional/serial/CacheCmd/cache/list 0.08
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.38
81 TestFunctional/serial/CacheCmd/cache/delete 0.16
82 TestFunctional/serial/MinikubeKubectlCmd 0.54
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.8
84 TestFunctional/serial/ExtraConfig 36.87
85 TestFunctional/serial/ComponentHealth 0.05
86 TestFunctional/serial/LogsCmd 2.68
87 TestFunctional/serial/LogsFileCmd 2.88
88 TestFunctional/serial/InvalidService 4.88
90 TestFunctional/parallel/ConfigCmd 0.51
91 TestFunctional/parallel/DashboardCmd 16.77
92 TestFunctional/parallel/DryRun 1.07
93 TestFunctional/parallel/InternationalLanguage 0.73
94 TestFunctional/parallel/StatusCmd 0.5
98 TestFunctional/parallel/ServiceCmdConnect 17.36
99 TestFunctional/parallel/AddonsCmd 0.26
100 TestFunctional/parallel/PersistentVolumeClaim 29.57
102 TestFunctional/parallel/SSHCmd 0.31
103 TestFunctional/parallel/CpCmd 0.97
104 TestFunctional/parallel/MySQL 24.24
105 TestFunctional/parallel/FileSync 0.19
106 TestFunctional/parallel/CertSync 0.95
110 TestFunctional/parallel/NodeLabels 0.05
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
114 TestFunctional/parallel/License 0.46
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.16
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
126 TestFunctional/parallel/ServiceCmd/DeployApp 7.12
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
128 TestFunctional/parallel/ProfileCmd/profile_list 0.29
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
130 TestFunctional/parallel/MountCmd/any-port 10.3
131 TestFunctional/parallel/ServiceCmd/List 0.39
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
134 TestFunctional/parallel/ServiceCmd/Format 0.27
135 TestFunctional/parallel/ServiceCmd/URL 0.25
136 TestFunctional/parallel/MountCmd/specific-port 1.83
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
138 TestFunctional/parallel/Version/short 0.1
139 TestFunctional/parallel/Version/components 0.48
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.18
144 TestFunctional/parallel/ImageCommands/ImageBuild 5.87
145 TestFunctional/parallel/ImageCommands/Setup 4.62
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.43
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.02
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.33
149 TestFunctional/parallel/DockerEnv/bash 0.83
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.77
154 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.54
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.11
157 TestFunctional/delete_addon-resizer_images 0.13
158 TestFunctional/delete_my-image_image 0.05
159 TestFunctional/delete_minikube_cached_images 0.05
165 TestIngressAddonLegacy/StartLegacyK8sCluster 73.89
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 19.24
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.54
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.85
172 TestJSONOutput/start/Command 50.7
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.48
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.44
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 8.18
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.79
200 TestMainNoArgs 0.08
201 TestMinikubeProfile 83.11
204 TestMountStart/serial/StartWithMountFirst 15.9
205 TestMountStart/serial/VerifyMountFirst 0.31
206 TestMountStart/serial/StartWithMountSecond 16.03
207 TestMountStart/serial/VerifyMountSecond 0.3
208 TestMountStart/serial/DeleteFirst 2.36
209 TestMountStart/serial/VerifyMountPostDelete 0.29
210 TestMountStart/serial/Stop 2.24
211 TestMountStart/serial/RestartStopped 16.72
212 TestMountStart/serial/VerifyMountPostStop 0.3
215 TestMultiNode/serial/FreshStart2Nodes 160.07
216 TestMultiNode/serial/DeployApp2Nodes 8.22
217 TestMultiNode/serial/PingHostFrom2Pods 0.86
218 TestMultiNode/serial/AddNode 37.58
219 TestMultiNode/serial/MultiNodeLabels 0.05
220 TestMultiNode/serial/ProfileList 0.21
221 TestMultiNode/serial/CopyFile 5.56
222 TestMultiNode/serial/StopNode 2.74
223 TestMultiNode/serial/StartAfterStop 27.49
224 TestMultiNode/serial/RestartKeepsNodes 164.06
225 TestMultiNode/serial/DeleteNode 2.99
226 TestMultiNode/serial/StopMultiNode 16.5
227 TestMultiNode/serial/RestartMultiNode 103.11
228 TestMultiNode/serial/ValidateNameConflict 45.18
232 TestPreload 167.95
234 TestScheduledStopUnix 113.6
235 TestSkaffold 123.14
240 TestKubernetesUpgrade 148.35
253 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 5.19
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.48
262 TestStoppedBinaryUpgrade/Setup 1.27
263 TestStoppedBinaryUpgrade/Upgrade 190.01
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.5
268 TestNoKubernetes/serial/Start 19.04
269 TestStoppedBinaryUpgrade/MinikubeLogs 2.67
271 TestPause/serial/Start 50.77
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
273 TestNoKubernetes/serial/ProfileList 0.45
274 TestNoKubernetes/serial/Stop 2.24
275 TestNoKubernetes/serial/StartNoArgs 23.74
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.14
277 TestNetworkPlugins/group/auto/Start 50.94
278 TestPause/serial/SecondStartNoReconfiguration 36.43
279 TestNetworkPlugins/group/auto/KubeletFlags 0.16
280 TestNetworkPlugins/group/auto/NetCatPod 18.22
281 TestPause/serial/Pause 0.54
282 TestPause/serial/VerifyStatus 0.17
283 TestPause/serial/Unpause 0.54
284 TestPause/serial/PauseAgain 0.67
285 TestPause/serial/DeletePaused 5.28
286 TestPause/serial/VerifyDeletedResources 0.23
287 TestNetworkPlugins/group/kindnet/Start 58.23
288 TestNetworkPlugins/group/auto/DNS 0.15
289 TestNetworkPlugins/group/auto/Localhost 0.11
290 TestNetworkPlugins/group/auto/HairPin 0.12
291 TestNetworkPlugins/group/calico/Start 83.15
292 TestNetworkPlugins/group/kindnet/ControllerPod 6
293 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
294 TestNetworkPlugins/group/kindnet/NetCatPod 17.23
295 TestNetworkPlugins/group/kindnet/DNS 0.13
296 TestNetworkPlugins/group/kindnet/Localhost 0.11
297 TestNetworkPlugins/group/kindnet/HairPin 0.11
298 TestNetworkPlugins/group/custom-flannel/Start 63.64
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.18
301 TestNetworkPlugins/group/calico/NetCatPod 15.22
302 TestNetworkPlugins/group/calico/DNS 0.14
303 TestNetworkPlugins/group/calico/Localhost 0.12
304 TestNetworkPlugins/group/calico/HairPin 0.12
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 16.21
309 TestNetworkPlugins/group/custom-flannel/DNS 0.12
310 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
311 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
312 TestNetworkPlugins/group/flannel/Start 59.47
313 TestNetworkPlugins/group/bridge/Start 56.83
314 TestNetworkPlugins/group/flannel/ControllerPod 6.01
315 TestNetworkPlugins/group/flannel/KubeletFlags 0.17
316 TestNetworkPlugins/group/flannel/NetCatPod 16.24
317 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
318 TestNetworkPlugins/group/bridge/NetCatPod 15.21
319 TestNetworkPlugins/group/flannel/DNS 0.15
320 TestNetworkPlugins/group/flannel/Localhost 0.12
321 TestNetworkPlugins/group/flannel/HairPin 0.11
322 TestNetworkPlugins/group/bridge/DNS 0.12
323 TestNetworkPlugins/group/bridge/Localhost 0.11
324 TestNetworkPlugins/group/bridge/HairPin 0.1
325 TestNetworkPlugins/group/kubenet/Start 52.06
327 TestStartStop/group/old-k8s-version/serial/FirstStart 157.81
328 TestNetworkPlugins/group/kubenet/KubeletFlags 0.17
329 TestNetworkPlugins/group/kubenet/NetCatPod 16.24
330 TestNetworkPlugins/group/kubenet/DNS 0.14
331 TestNetworkPlugins/group/kubenet/Localhost 0.11
332 TestNetworkPlugins/group/kubenet/HairPin 0.1
334 TestStartStop/group/no-preload/serial/FirstStart 68.65
335 TestStartStop/group/no-preload/serial/DeployApp 11.63
336 TestStartStop/group/old-k8s-version/serial/DeployApp 12.28
337 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
338 TestStartStop/group/no-preload/serial/Stop 8.28
339 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
340 TestStartStop/group/old-k8s-version/serial/Stop 8.23
341 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.35
342 TestStartStop/group/no-preload/serial/SecondStart 296.44
343 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
344 TestStartStop/group/old-k8s-version/serial/SecondStart 498.49
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.17
348 TestStartStop/group/no-preload/serial/Pause 2
350 TestStartStop/group/embed-certs/serial/FirstStart 51.14
351 TestStartStop/group/embed-certs/serial/DeployApp 11.3
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
353 TestStartStop/group/embed-certs/serial/Stop 8.25
354 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
355 TestStartStop/group/embed-certs/serial/SecondStart 309.65
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
358 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.18
359 TestStartStop/group/old-k8s-version/serial/Pause 1.89
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.3
362 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.83
364 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.26
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 299.15
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 23
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
370 TestStartStop/group/embed-certs/serial/Pause 1.97
372 TestStartStop/group/newest-cni/serial/FirstStart 47.01
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
375 TestStartStop/group/newest-cni/serial/Stop 8.23
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
377 TestStartStop/group/newest-cni/serial/SecondStart 38.88
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.18
381 TestStartStop/group/newest-cni/serial/Pause 1.87
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
383 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.18
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.93
TestDownloadOnly/v1.16.0/json-events (41.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-301000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-301000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (41.717352922s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (41.72s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-301000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-301000: exit status 85 (313.430193ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-301000 | jenkins | v1.32.0 | 19 Dec 23 11:02 PST |          |
	|         | -p download-only-301000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/19 11:02:12
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 11:02:12.330996   20869 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:02:12.331296   20869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:02:12.331301   20869 out.go:309] Setting ErrFile to fd 2...
	I1219 11:02:12.331305   20869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:02:12.331481   20869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	W1219 11:02:12.331579   20869 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17837-20429/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17837-20429/.minikube/config/config.json: no such file or directory
	I1219 11:02:12.333277   20869 out.go:303] Setting JSON to true
	I1219 11:02:12.355471   20869 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5502,"bootTime":1703007030,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:02:12.355559   20869 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:02:12.377630   20869 out.go:97] [download-only-301000] minikube v1.32.0 on Darwin 14.2
	I1219 11:02:12.402272   20869 out.go:169] MINIKUBE_LOCATION=17837
	I1219 11:02:12.377873   20869 notify.go:220] Checking for updates...
	W1219 11:02:12.377875   20869 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball: no such file or directory
	I1219 11:02:12.445331   20869 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:02:12.487397   20869 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:02:12.529239   20869 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:02:12.572175   20869 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	W1219 11:02:12.615407   20869 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 11:02:12.615911   20869 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:02:12.646174   20869 out.go:97] Using the hyperkit driver based on user configuration
	I1219 11:02:12.646229   20869 start.go:298] selected driver: hyperkit
	I1219 11:02:12.646243   20869 start.go:902] validating driver "hyperkit" against <nil>
	I1219 11:02:12.646473   20869 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:02:12.646725   20869 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:02:12.783439   20869 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:02:12.787388   20869 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:02:12.787418   20869 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1219 11:02:12.787481   20869 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1219 11:02:12.790194   20869 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1219 11:02:12.790331   20869 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 11:02:12.790380   20869 cni.go:84] Creating CNI manager for ""
	I1219 11:02:12.790394   20869 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1219 11:02:12.790403   20869 start_flags.go:323] config:
	{Name:download-only-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:02:12.790684   20869 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:02:12.812422   20869 out.go:97] Downloading VM boot image ...
	I1219 11:02:12.812558   20869 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/iso/amd64/minikube-v1.32.1-1702708929-17806-amd64.iso
	I1219 11:02:17.382827   20869 out.go:97] Starting control plane node download-only-301000 in cluster download-only-301000
	I1219 11:02:17.382866   20869 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1219 11:02:17.442082   20869 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1219 11:02:17.442149   20869 cache.go:56] Caching tarball of preloaded images
	I1219 11:02:17.442524   20869 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1219 11:02:17.464473   20869 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1219 11:02:17.464524   20869 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:02:17.547910   20869 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1219 11:02:23.425088   20869 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:02:23.425276   20869 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:02:23.972590   20869 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1219 11:02:23.972843   20869 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/download-only-301000/config.json ...
	I1219 11:02:23.972865   20869 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/download-only-301000/config.json: {Name:mk73950d540a4a10e8b6dbd7927c49859cc10696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 11:02:23.973138   20869 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1219 11:02:23.973430   20869 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-301000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
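
Note: exit status 85 is expected here rather than a regression. The profile was created with --download-only, so no control plane node exists for `minikube logs` to inspect (hence "The control plane node "" does not exist." above), and the test only measures the command's duration. A minimal sketch, assuming the same binary and profile name:

	# `minikube logs` against a download-only profile has no node to read
	# from, so a non-zero exit is the expected outcome; the test still passes.
	out/minikube-darwin-amd64 logs -p download-only-301000
	echo $?   # prints 85, matching the non-zero exit recorded above
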
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (14.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-301000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-301000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperkit : (14.80317168s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (14.80s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-301000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-301000: exit status 85 (296.041151ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-301000 | jenkins | v1.32.0 | 19 Dec 23 11:02 PST |          |
	|         | -p download-only-301000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-301000 | jenkins | v1.32.0 | 19 Dec 23 11:02 PST |          |
	|         | -p download-only-301000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/19 11:02:54
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 11:02:54.364218   20892 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:02:54.364402   20892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:02:54.364409   20892 out.go:309] Setting ErrFile to fd 2...
	I1219 11:02:54.364413   20892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:02:54.364592   20892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	W1219 11:02:54.364687   20892 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17837-20429/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17837-20429/.minikube/config/config.json: no such file or directory
	I1219 11:02:54.365905   20892 out.go:303] Setting JSON to true
	I1219 11:02:54.388540   20892 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5544,"bootTime":1703007030,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:02:54.388628   20892 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:02:54.410481   20892 out.go:97] [download-only-301000] minikube v1.32.0 on Darwin 14.2
	I1219 11:02:54.431399   20892 out.go:169] MINIKUBE_LOCATION=17837
	I1219 11:02:54.410590   20892 notify.go:220] Checking for updates...
	I1219 11:02:54.473083   20892 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:02:54.515397   20892 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:02:54.557288   20892 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:02:54.599235   20892 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	W1219 11:02:54.642337   20892 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 11:02:54.642767   20892 config.go:182] Loaded profile config "download-only-301000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1219 11:02:54.642808   20892 start.go:810] api.Load failed for download-only-301000: filestore "download-only-301000": Docker machine "download-only-301000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1219 11:02:54.642891   20892 driver.go:392] Setting default libvirt URI to qemu:///system
	W1219 11:02:54.642910   20892 start.go:810] api.Load failed for download-only-301000: filestore "download-only-301000": Docker machine "download-only-301000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1219 11:02:54.671194   20892 out.go:97] Using the hyperkit driver based on existing profile
	I1219 11:02:54.671243   20892 start.go:298] selected driver: hyperkit
	I1219 11:02:54.671255   20892 start.go:902] validating driver "hyperkit" against &{Name:download-only-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:02:54.671601   20892 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:02:54.671761   20892 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:02:54.680726   20892 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:02:54.684484   20892 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:02:54.684506   20892 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1219 11:02:54.687261   20892 cni.go:84] Creating CNI manager for ""
	I1219 11:02:54.687281   20892 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1219 11:02:54.687302   20892 start_flags.go:323] config:
	{Name:download-only-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:02:54.687459   20892 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:02:54.708345   20892 out.go:97] Starting control plane node download-only-301000 in cluster download-only-301000
	I1219 11:02:54.708358   20892 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1219 11:02:54.765858   20892 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1219 11:02:54.765885   20892 cache.go:56] Caching tarball of preloaded images
	I1219 11:02:54.766160   20892 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1219 11:02:54.787705   20892 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1219 11:02:54.787736   20892 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:02:54.868362   20892 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1219 11:03:02.311780   20892 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:03:02.311981   20892 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:03:02.935474   20892 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1219 11:03:02.935565   20892 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/download-only-301000/config.json ...
	I1219 11:03:02.935940   20892 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1219 11:03:02.936155   20892 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/darwin/amd64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-301000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.30s)
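
For context on the download step logged above: minikube fetches the preload tarball with a "?checksum=md5:..." query fragment and verifies the payload once the transfer finishes. Below is a minimal Go sketch of that verify-after-download pattern; the URL and md5 value are the ones from the download.go:107 line in the log, while the helper name and structure are illustrative, not minikube's actual implementation.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 fetches url into dest and hashes the bytes as they are
// written, so the checksum check needs no second read pass over the file.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		os.Remove(dest) // discard the corrupt artifact so a retry starts clean
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
		"preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
		"7ebdea7754e21f51b865dbfc36b53b7d",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}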

TestDownloadOnly/v1.29.0-rc.2/json-events (15.6s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-301000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-301000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperkit : (15.600370029s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (15.60s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-301000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-301000: exit status 85 (314.971769ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-301000 | jenkins | v1.32.0 | 19 Dec 23 11:02 PST |          |
	|         | -p download-only-301000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-301000 | jenkins | v1.32.0 | 19 Dec 23 11:02 PST |          |
	|         | -p download-only-301000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-301000 | jenkins | v1.32.0 | 19 Dec 23 11:03 PST |          |
	|         | -p download-only-301000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=hyperkit                 |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/19 11:03:09
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.21.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 11:03:09.469769   20909 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:03:09.470097   20909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:03:09.470102   20909 out.go:309] Setting ErrFile to fd 2...
	I1219 11:03:09.470142   20909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:03:09.470324   20909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	W1219 11:03:09.470472   20909 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17837-20429/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17837-20429/.minikube/config/config.json: no such file or directory
	I1219 11:03:09.471818   20909 out.go:303] Setting JSON to true
	I1219 11:03:09.494489   20909 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5559,"bootTime":1703007030,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:03:09.494600   20909 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:03:09.516436   20909 out.go:97] [download-only-301000] minikube v1.32.0 on Darwin 14.2
	I1219 11:03:09.537888   20909 out.go:169] MINIKUBE_LOCATION=17837
	I1219 11:03:09.516651   20909 notify.go:220] Checking for updates...
	I1219 11:03:09.581095   20909 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:03:09.623122   20909 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:03:09.645146   20909 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:03:09.666782   20909 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	W1219 11:03:09.710109   20909 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 11:03:09.710914   20909 config.go:182] Loaded profile config "download-only-301000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1219 11:03:09.710996   20909 start.go:810] api.Load failed for download-only-301000: filestore "download-only-301000": Docker machine "download-only-301000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1219 11:03:09.711144   20909 driver.go:392] Setting default libvirt URI to qemu:///system
	W1219 11:03:09.711187   20909 start.go:810] api.Load failed for download-only-301000: filestore "download-only-301000": Docker machine "download-only-301000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1219 11:03:09.740981   20909 out.go:97] Using the hyperkit driver based on existing profile
	I1219 11:03:09.741032   20909 start.go:298] selected driver: hyperkit
	I1219 11:03:09.741045   20909 start.go:902] validating driver "hyperkit" against &{Name:download-only-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:03:09.741367   20909 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:03:09.741579   20909 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17837-20429/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1219 11:03:09.750592   20909 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.32.0
	I1219 11:03:09.754516   20909 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:03:09.754541   20909 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1219 11:03:09.757301   20909 cni.go:84] Creating CNI manager for ""
	I1219 11:03:09.757321   20909 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1219 11:03:09.757336   20909 start_flags.go:323] config:
	{Name:download-only-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-301000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:03:09.757477   20909 iso.go:125] acquiring lock: {Name:mk4b58cf2276bb45b0aa3c6bb84562661ef8327d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 11:03:09.778918   20909 out.go:97] Starting control plane node download-only-301000 in cluster download-only-301000
	I1219 11:03:09.778950   20909 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1219 11:03:09.833933   20909 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1219 11:03:09.833981   20909 cache.go:56] Caching tarball of preloaded images
	I1219 11:03:09.834342   20909 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1219 11:03:09.855972   20909 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I1219 11:03:09.856023   20909 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:03:09.939247   20909 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:74b99cd9fa76659778caad266ad399ba -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I1219 11:03:17.883139   20909 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:03:17.883364   20909 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I1219 11:03:18.424936   20909 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I1219 11:03:18.425029   20909 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/download-only-301000/config.json ...
	I1219 11:03:18.425412   20909 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I1219 11:03:18.425641   20909 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17837-20429/.minikube/cache/darwin/amd64/v1.29.0-rc.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-301000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.32s)

TestDownloadOnly/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.39s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-301000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.37s)

TestBinaryMirror (1.01s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-735000 --alsologtostderr --binary-mirror http://127.0.0.1:55376 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-735000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-735000
--- PASS: TestBinaryMirror (1.01s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-233000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-233000: exit status 85 (209.655997ms)

-- stdout --
	* Profile "addons-233000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-233000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)
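
This PreSetup test (and the Disabling variant that follows) asserts a specific non-zero exit status, the 85 that minikube returned here for the missing profile, rather than just any failure. A minimal Go sketch of how that distinction is made with os/exec; exitStatus is a hypothetical helper, not the suite's own code, and the binary path is the one from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitStatus runs a command and returns its exit code, distinguishing
// "ran and failed" (ExitError) from "could not be started at all".
func exitStatus(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return -1, err // e.g. binary not found or not executable
}

func main() {
	code, err := exitStatus("out/minikube-darwin-amd64",
		"addons", "enable", "dashboard", "-p", "addons-233000")
	if err != nil {
		fmt.Println("could not run:", err)
		return
	}
	fmt.Println("exit status:", code) // the test above expects 85 here
}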

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-233000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-233000: exit status 85 (189.072976ms)

-- stdout --
	* Profile "addons-233000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-233000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (199.28s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-233000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-233000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m19.282469941s)
--- PASS: TestAddons/Setup (199.28s)

TestAddons/parallel/Registry (18.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 8.811298ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7ksqc" [d53da532-4feb-46d8-aeb2-75f0e83f7c00] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002888032s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fdvtj" [52d53384-7a8b-4089-9930-7ee7211cc4e7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004551164s
addons_test.go:339: (dbg) Run:  kubectl --context addons-233000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-233000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-233000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.027760719s)
addons_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 ip
2023/12/19 11:07:05 [DEBUG] GET http://192.168.169.3:5000
addons_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.67s)

TestAddons/parallel/Ingress (20.95s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-233000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-233000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-233000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [300a385e-df01-42e7-b80a-bbdc73e96b10] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [300a385e-df01-42e7-b80a-bbdc73e96b10] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003274574s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-233000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.169.3
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p addons-233000 addons disable ingress --alsologtostderr -v=1: (7.430060457s)
--- PASS: TestAddons/parallel/Ingress (20.95s)

TestAddons/parallel/InspektorGadget (10.47s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6cdrd" [42218676-43ea-4f57-b5b7-7888aa6992f6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002942309s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-233000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-233000: (5.466337249s)
--- PASS: TestAddons/parallel/InspektorGadget (10.47s)

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 1.962746ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-f6wj5" [e08cd113-ab6f-4673-9789-85bb0e45edeb] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006293794s
addons_test.go:414: (dbg) Run:  kubectl --context addons-233000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (10.3s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 2.89615ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-f74bw" [5d408158-3d8f-4bf3-b11e-4f1c0218f87f] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003533372s
addons_test.go:472: (dbg) Run:  kubectl --context addons-233000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-233000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.872216349s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.30s)

TestAddons/parallel/CSI (38.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 13.546764ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-233000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-233000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ebbc85d1-821c-4138-8221-9d0bf2f007bc] Pending
helpers_test.go:344: "task-pv-pod" [ebbc85d1-821c-4138-8221-9d0bf2f007bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ebbc85d1-821c-4138-8221-9d0bf2f007bc] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004256821s
addons_test.go:583: (dbg) Run:  kubectl --context addons-233000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-233000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-233000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-233000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-233000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-233000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-233000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [80bda58e-a0a6-4ccb-a0ce-140bdc9de7e6] Pending
helpers_test.go:344: "task-pv-pod-restore" [80bda58e-a0a6-4ccb-a0ce-140bdc9de7e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [80bda58e-a0a6-4ccb-a0ce-140bdc9de7e6] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00523166s
addons_test.go:625: (dbg) Run:  kubectl --context addons-233000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-233000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-233000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-233000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.482245237s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.60s)
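
The repeated helpers_test.go:394 invocations above are a poll loop: the helper re-runs kubectl with a JSONPath query until the PVC reports the expected phase. A compact Go sketch of that pattern, assuming kubectl is on PATH; the claim name, context, and 6m0s budget come from the log, while the 2s poll interval and the helper itself are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc <name> -o jsonpath={.status.phase}`
// until the claim reaches wantPhase or the timeout elapses.
func waitForPVCPhase(kubeContext, name, wantPhase string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == wantPhase {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %q did not reach phase %q within %v", name, wantPhase, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-233000", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}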

TestAddons/parallel/Headlamp (14.2s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-233000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-233000 --alsologtostderr -v=1: (1.193514739s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-8g8q8" [8c2d0ad1-c5f4-4d1d-8422-52fbdcf3deb0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-8g8q8" [8c2d0ad1-c5f4-4d1d-8422-52fbdcf3deb0] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.002264019s
--- PASS: TestAddons/parallel/Headlamp (14.20s)

TestAddons/parallel/CloudSpanner (6.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-4x7zd" [7a946700-8986-4264-b2ea-00f199c2dcd6] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004206771s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-233000
--- PASS: TestAddons/parallel/CloudSpanner (6.37s)

TestAddons/parallel/LocalPath (53.44s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-233000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-233000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1545e548-8acf-4307-ae25-22e7bc87e9f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1545e548-8acf-4307-ae25-22e7bc87e9f2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1545e548-8acf-4307-ae25-22e7bc87e9f2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004364527s
addons_test.go:890: (dbg) Run:  kubectl --context addons-233000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 ssh "cat /opt/local-path-provisioner/pvc-9d3fa3ee-ac27-461f-a999-ad6685b5fd89_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-233000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-233000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-233000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-amd64 -p addons-233000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.738435179s)
--- PASS: TestAddons/parallel/LocalPath (53.44s)

TestAddons/parallel/NvidiaDevicePlugin (5.37s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b5v72" [15dc85c5-f28d-4b8f-bdc8-390ab83e42eb] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005784747s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-233000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.37s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-233000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-233000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/StoppedEnableDisable (5.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-233000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-233000: (5.218690273s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-233000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-233000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-233000
--- PASS: TestAddons/StoppedEnableDisable (5.75s)

TestCertOptions (37.83s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-731000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-731000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (34.067234953s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-731000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-731000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-731000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-731000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-731000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-731000: (3.417087626s)
--- PASS: TestCertOptions (37.83s)
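
TestCertOptions feeds extra SANs and a custom port into the apiserver certificate and then inspects the result with openssl over ssh. The same check can be done programmatically; a sketch with Go's crypto/x509, assuming apiserver.crt has first been copied out of the VM (the local path is illustrative, the SAN values come from the --apiserver-ips/--apiserver-names flags above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// checkSANs parses a PEM certificate and verifies it is valid for each given
// name; VerifyHostname accepts both DNS names and IP address strings.
func checkSANs(pemPath string, names ...string) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	for _, n := range names {
		if err := cert.VerifyHostname(n); err != nil {
			return fmt.Errorf("SAN %q missing: %v", n, err)
		}
	}
	return nil
}

func main() {
	err := checkSANs("apiserver.crt",
		"www.google.com", "localhost", "192.168.15.15", "127.0.0.1")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("certificate covers the requested SANs")
}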

TestCertExpiration (241.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (34.538832038s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1219 11:41:09.443502   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-027000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (21.749540264s)
helpers_test.go:175: Cleaning up "cert-expiration-027000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-027000
E1219 11:41:29.923887   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-027000: (5.283176249s)
--- PASS: TestCertExpiration (241.57s)

TestDockerFlags (38.33s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-834000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-834000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (34.557284098s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-834000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-834000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-834000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-834000: (3.440170684s)
--- PASS: TestDockerFlags (38.33s)
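
Note: the pass condition is simply that each --docker-env value lands in the Docker unit's Environment= and each --docker-opt lands in ExecStart=. A sketch with the same flags:

	minikube start -p docker-flags-834000 --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=hyperkit
	# expect FOO=BAR and BAZ=BAT here
	minikube -p docker-flags-834000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expect --debug and --icc=true here
	minikube -p docker-flags-834000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"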

TestForceSystemdFlag (52.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-138000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E1219 11:36:46.981689   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:36:57.744634   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:37:10.727524   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-138000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (46.707793686s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-138000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-138000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-138000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-138000: (5.284811122s)
--- PASS: TestForceSystemdFlag (52.18s)
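
Note: what the check boils down to: with --force-systemd, Docker inside the VM must report systemd (not cgroupfs) as its cgroup driver. Sketch:

	minikube start -p force-systemd-flag-138000 --memory=2048 --force-systemd --driver=hyperkit
	# expected output: systemd
	minikube -p force-systemd-flag-138000 ssh "docker info --format {{.CgroupDriver}}"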

TestForceSystemdEnv (45.53s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-675000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-675000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (41.762945335s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-675000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-675000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-675000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-675000: (3.587924794s)
--- PASS: TestForceSystemdEnv (45.53s)

TestHyperKitDriverInstallOrUpdate (11.35s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (11.35s)

TestErrorSpam/setup (34.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-509000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-509000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 --driver=hyperkit : (34.729010909s)
--- PASS: TestErrorSpam/setup (34.73s)

TestErrorSpam/start (1.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 start --dry-run
--- PASS: TestErrorSpam/start (1.35s)

TestErrorSpam/status (0.48s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 status
--- PASS: TestErrorSpam/status (0.48s)

TestErrorSpam/pause (1.3s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 pause
--- PASS: TestErrorSpam/pause (1.30s)

TestErrorSpam/unpause (1.25s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 unpause
--- PASS: TestErrorSpam/unpause (1.25s)

TestErrorSpam/stop (3.69s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 stop: (3.244068064s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-509000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-509000 stop
--- PASS: TestErrorSpam/stop (3.69s)
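
Note: the whole ErrorSpam group drives ordinary lifecycle subcommands and fails only if unexpected warning/error lines appear in their output. Stripped of the harness's --log_dir bookkeeping, the commands are just (profile name reused from the log):

	minikube -p nospam-509000 pause
	minikube -p nospam-509000 unpause
	minikube -p nospam-509000 stop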

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /Users/jenkins/minikube-integration/17837-20429/.minikube/files/etc/test/nested/copy/20867/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.4s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-795000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2233: (dbg) Done: out/minikube-darwin-amd64 start -p functional-795000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (50.395213777s)
--- PASS: TestFunctional/serial/StartWithProxy (50.40s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.47s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-795000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-795000 --alsologtostderr -v=8: (39.466382302s)
functional_test.go:659: soft start took 39.466870809s for "functional-795000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.47s)
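
Note: a "soft start" is a second `minikube start` against a profile whose VM is already running; the existing machine is reused and only reconfigured, which is why it completes in ~39s rather than a full provision. Sketch, assuming the profile already exists:

	minikube start -p functional-795000 --alsologtostderr -v=8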

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-795000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (8.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 cache add registry.k8s.io/pause:3.1: (3.204063175s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 cache add registry.k8s.io/pause:3.3: (3.213333755s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 cache add registry.k8s.io/pause:latest: (2.296180459s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.71s)
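
Note: cache add pulls the image on the host once and preloads it into the node's container runtime; visibility inside the node can be confirmed with crictl, as the later verify step does. Sketch:

	minikube -p functional-795000 cache add registry.k8s.io/pause:3.1
	# the cached image should now be listed inside the node
	minikube -p functional-795000 ssh sudo crictl images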

TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local661324151/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cache add minikube-local-cache-test:functional-795000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cache delete minikube-local-cache-test:functional-795000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-795000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (157.713577ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 cache reload: (1.864052166s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.38s)
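
Note: the deliberate `docker rmi` above sets up the scenario cache reload exists for: restoring any cached image that has gone missing from the node. By hand, roughly:

	minikube -p functional-795000 ssh sudo docker rmi registry.k8s.io/pause:latest
	minikube -p functional-795000 cache reload
	# inspecti fails before the reload (exit 1) and succeeds after it
	minikube -p functional-795000 ssh sudo crictl inspecti registry.k8s.io/pause:latest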

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 kubectl -- --context functional-795000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.54s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.8s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-795000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

TestFunctional/serial/ExtraConfig (36.87s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-795000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 11:11:46.980078   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:46.986180   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:46.997851   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:47.018524   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:47.059651   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:47.140986   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:47.301702   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:47.621967   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:48.262464   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:49.544042   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:52.104274   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:11:57.225725   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-795000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.87383401s)
functional_test.go:757: restart took 36.874006671s for "functional-795000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.87s)
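
Note: --extra-config takes component.key=value pairs that minikube forwards to the matching Kubernetes component on restart; the value used here turns on an extra admission plugin on the apiserver:

	minikube start -p functional-795000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all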

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-795000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.68s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 logs: (2.683171264s)
--- PASS: TestFunctional/serial/LogsCmd (2.68s)

TestFunctional/serial/LogsFileCmd (2.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4272406564/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd4272406564/001/logs.txt: (2.880591681s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.88s)
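
Note: both log commands collect the same bundle; --file only redirects it to disk, which is the form bug reports ask for:

	minikube -p functional-795000 logs
	minikube -p functional-795000 logs --file logs.txt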

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-795000 apply -f testdata/invalidsvc.yaml
E1219 11:12:07.466555   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
functional_test.go:2334: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-795000
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-795000: exit status 115 (279.518373ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.170.3:30476 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-795000 delete -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Done: kubectl --context functional-795000 delete -f testdata/invalidsvc.yaml: (1.405424129s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)
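
Note: the negative case works because `minikube service` probes for a running backing pod rather than just printing the NodePort URL; any Service with no ready endpoints reproduces the exit status 115 (SVC_UNREACHABLE) seen above:

	kubectl --context functional-795000 apply -f testdata/invalidsvc.yaml
	minikube -p functional-795000 service invalid-svc   # exit 115
	kubectl --context functional-795000 delete -f testdata/invalidsvc.yaml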

TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 config get cpus: exit status 14 (72.966437ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 config get cpus: exit status 14 (56.097746ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)
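
Note: config get on an unset key exits 14 ("specified key could not be found in config"), which the test asserts on both sides of a set/unset cycle:

	minikube -p functional-795000 config set cpus 2
	minikube -p functional-795000 config get cpus    # prints 2
	minikube -p functional-795000 config unset cpus
	minikube -p functional-795000 config get cpus    # exit 14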

TestFunctional/parallel/DashboardCmd (16.77s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-795000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-795000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21906: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.77s)
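
Note: --url makes the dashboard command print the proxied URL instead of opening a browser, which is what lets the test run headless; the "unable to kill pid" line appears to be only the harness tearing down an already-exited process:

	minikube -p functional-795000 dashboard --url --port 36195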

TestFunctional/parallel/DryRun (1.07s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-795000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-795000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (513.796749ms)
-- stdout --
	* [functional-795000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1219 11:12:50.022662   21858 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:12:50.022866   21858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:12:50.022871   21858 out.go:309] Setting ErrFile to fd 2...
	I1219 11:12:50.022875   21858 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:12:50.023070   21858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:12:50.024453   21858 out.go:303] Setting JSON to false
	I1219 11:12:50.046664   21858 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6140,"bootTime":1703007030,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:12:50.046763   21858 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:12:50.069159   21858 out.go:177] * [functional-795000] minikube v1.32.0 on Darwin 14.2
	I1219 11:12:50.112867   21858 out.go:177]   - MINIKUBE_LOCATION=17837
	I1219 11:12:50.112931   21858 notify.go:220] Checking for updates...
	I1219 11:12:50.156782   21858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:12:50.183936   21858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:12:50.204513   21858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:12:50.225540   21858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:12:50.246356   21858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 11:12:50.268253   21858 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:12:50.268914   21858 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:12:50.268987   21858 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:12:50.277941   21858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56289
	I1219 11:12:50.278312   21858 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:12:50.278721   21858 main.go:141] libmachine: Using API Version  1
	I1219 11:12:50.278731   21858 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:12:50.278936   21858 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:12:50.279032   21858 main.go:141] libmachine: (functional-795000) Calling .DriverName
	I1219 11:12:50.279209   21858 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:12:50.279444   21858 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:12:50.279468   21858 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:12:50.287532   21858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56291
	I1219 11:12:50.287894   21858 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:12:50.288265   21858 main.go:141] libmachine: Using API Version  1
	I1219 11:12:50.288280   21858 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:12:50.288520   21858 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:12:50.288642   21858 main.go:141] libmachine: (functional-795000) Calling .DriverName
	I1219 11:12:50.317463   21858 out.go:177] * Using the hyperkit driver based on existing profile
	I1219 11:12:50.359366   21858 start.go:298] selected driver: hyperkit
	I1219 11:12:50.359383   21858 start.go:902] validating driver "hyperkit" against &{Name:functional-795000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-795000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.170.3 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:12:50.359528   21858 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 11:12:50.383387   21858 out.go:177] 
	W1219 11:12:50.404378   21858 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 11:12:50.441499   21858 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-795000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.07s)
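
Note: --dry-run runs the full driver/resource validation without creating or touching the VM, so the undersized request fails fast with exit code 23 while a plain dry run validates cleanly:

	minikube start -p functional-795000 --dry-run --memory 250MB --driver=hyperkit   # exit 23: below the 1800MB usable minimum
	minikube start -p functional-795000 --dry-run --driver=hyperkit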

TestFunctional/parallel/InternationalLanguage (0.73s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-795000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-795000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (733.746187ms)
-- stdout --
	* [functional-795000] minikube v1.32.0 sur Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1219 11:12:49.283594   21851 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:12:49.283874   21851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:12:49.283880   21851 out.go:309] Setting ErrFile to fd 2...
	I1219 11:12:49.283898   21851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:12:49.284130   21851 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:12:49.285715   21851 out.go:303] Setting JSON to false
	I1219 11:12:49.310187   21851 start.go:128] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6139,"bootTime":1703007030,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.2","kernelVersion":"23.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W1219 11:12:49.310278   21851 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1219 11:12:49.332065   21851 out.go:177] * [functional-795000] minikube v1.32.0 sur Darwin 14.2
	I1219 11:12:49.443700   21851 out.go:177]   - MINIKUBE_LOCATION=17837
	I1219 11:12:49.406028   21851 notify.go:220] Checking for updates...
	I1219 11:12:49.518785   21851 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	I1219 11:12:49.560778   21851 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1219 11:12:49.618720   21851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 11:12:49.677031   21851 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	I1219 11:12:49.735799   21851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 11:12:49.757343   21851 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:12:49.757726   21851 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:12:49.757768   21851 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:12:49.766343   21851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56284
	I1219 11:12:49.766711   21851 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:12:49.767129   21851 main.go:141] libmachine: Using API Version  1
	I1219 11:12:49.767140   21851 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:12:49.767381   21851 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:12:49.767491   21851 main.go:141] libmachine: (functional-795000) Calling .DriverName
	I1219 11:12:49.767683   21851 driver.go:392] Setting default libvirt URI to qemu:///system
	I1219 11:12:49.767944   21851 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:12:49.767970   21851 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:12:49.775929   21851 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56286
	I1219 11:12:49.776257   21851 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:12:49.776588   21851 main.go:141] libmachine: Using API Version  1
	I1219 11:12:49.776602   21851 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:12:49.776817   21851 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:12:49.776934   21851 main.go:141] libmachine: (functional-795000) Calling .DriverName
	I1219 11:12:49.821707   21851 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1219 11:12:49.858083   21851 start.go:298] selected driver: hyperkit
	I1219 11:12:49.858109   21851 start.go:902] validating driver "hyperkit" against &{Name:functional-795000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17806/minikube-v1.32.1-1702708929-17806-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1702920864-17822@sha256:4842b362f06b33d847d73f7ed166c93ce608f4c4cea49b711c7055fd50ebd1e0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-795000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.170.3 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1219 11:12:49.858330   21851 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 11:12:49.883940   21851 out.go:177] 
	W1219 11:12:49.904690   21851 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 11:12:49.947037   21851 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.73s)

TestFunctional/parallel/StatusCmd (0.5s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)
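
Note: status supports Go-template output via -f and JSON via -o; the template label "kublet" in the invocation above is a typo carried in the test itself (the field name, .Kubelet, is spelled correctly). Sketch:

	minikube -p functional-795000 status
	minikube -p functional-795000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	minikube -p functional-795000 status -o json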

TestFunctional/parallel/ServiceCmdConnect (17.36s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-795000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-795000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-wzx76" [c6bdf86f-91bb-4dcd-9e51-9f48282b6c7e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-wzx76" [c6bdf86f-91bb-4dcd-9e51-9f48282b6c7e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 17.00514028s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.170.3:30872
functional_test.go:1674: http://192.168.170.3:30872: success! body:

Hostname: hello-node-connect-55497b8b78-wzx76

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.170.3:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.170.3:30872
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (17.36s)
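
Note: end to end this is the standard deploy/expose/resolve loop; a sketch with the same image and port:

	kubectl --context functional-795000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-795000 expose deployment hello-node-connect --type=NodePort --port=8080
	# prints the reachable NodePort URL once the pod is Ready
	minikube -p functional-795000 service hello-node-connect --url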

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (29.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [63b8bb35-bfa2-4750-b5aa-613b5103b127] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003997795s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-795000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-795000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-795000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-795000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7835b619-057d-4545-bedf-dde78e0c7f9a] Pending
helpers_test.go:344: "sp-pod" [7835b619-057d-4545-bedf-dde78e0c7f9a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7835b619-057d-4545-bedf-dde78e0c7f9a] Running
E1219 11:12:27.947429   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00287491s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-795000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-795000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-795000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e632da1-47f3-406a-9606-235e2bee6fc7] Pending
helpers_test.go:344: "sp-pod" [0e632da1-47f3-406a-9606-235e2bee6fc7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e632da1-47f3-406a-9606-235e2bee6fc7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004767789s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-795000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.57s)
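
Note: the persistence check is: write through the first pod, delete it, and read the file back through a replacement pod bound to the same PVC. Condensed:

	kubectl --context functional-795000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-795000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-795000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-795000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-795000 apply -f testdata/storage-provisioner/pod.yaml
	# foo must survive the pod replacement
	kubectl --context functional-795000 exec sp-pod -- ls /tmp/mount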

TestFunctional/parallel/SSHCmd (0.31s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.31s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.97s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh -n functional-795000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cp functional-795000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd2953751617/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh -n functional-795000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh -n functional-795000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.97s)

                                                
                                    
TestFunctional/parallel/MySQL (24.24s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-795000 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-kxwml" [759a90f9-dc6e-41a0-b342-2b65453ee4ba] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-kxwml" [759a90f9-dc6e-41a0-b342-2b65453ee4ba] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003920168s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-795000 exec mysql-859648c796-kxwml -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-795000 exec mysql-859648c796-kxwml -- mysql -ppassword -e "show databases;": exit status 1 (127.797281ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-795000 exec mysql-859648c796-kxwml -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-795000 exec mysql-859648c796-kxwml -- mysql -ppassword -e "show databases;": exit status 1 (107.197827ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-795000 exec mysql-859648c796-kxwml -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.24s)
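
The two ERROR 2002 failures above are expected: the pod reports Running while mysqld is still initializing, so the server socket is not yet accepting connections, and the test simply retries the query until it succeeds. The same retry done by hand might look like this (the command is the one from the log; the 2-second interval is an arbitrary choice):

    # Poll until mysqld inside the pod starts accepting connections.
    until kubectl --context functional-795000 exec mysql-859648c796-kxwml -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done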

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/20867/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /etc/test/nested/copy/20867/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (0.95s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/20867.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /etc/ssl/certs/20867.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/20867.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /usr/share/ca-certificates/20867.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/208672.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /etc/ssl/certs/208672.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/208672.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /usr/share/ca-certificates/208672.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.95s)
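
The .0 file names checked above (51391683.0 and 3ec20f2e.0) are OpenSSL subject-hash names: hash-based aliases that let TLS libraries locate a certificate by the hash of its subject, here pointing at the synced 20867.pem and 208672.pem certificates. Assuming openssl is available in the guest image, the hash behind the first alias can be reproduced like this:

    # Prints the subject hash used to name the alias (expected: 51391683).
    out/minikube-darwin-amd64 -p functional-795000 ssh \
      "openssl x509 -noout -hash -in /etc/ssl/certs/20867.pem"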

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-795000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh "sudo systemctl is-active crio": exit status 1 (205.150106ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
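
The non-zero exit recorded above is the point of the test: Docker is the active runtime, so "systemctl is-active crio" prints "inactive" and exits with status 3, which the ssh helper surfaces as "Process exited with status 3". Checked by hand, the failure is the expected outcome:

    # A failing exit here means crio is inactive, which is what we want
    # when Docker is the configured container runtime.
    out/minikube-darwin-amd64 -p functional-795000 ssh "sudo systemctl is-active crio" \
      || echo "crio is not active, as expected"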

                                                
                                    
TestFunctional/parallel/License (0.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-795000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-795000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-795000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-795000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 21677: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-795000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-795000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a57fe0e5-a3da-43ad-99fb-e4c424db33b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a57fe0e5-a3da-43ad-99fb-e4c424db33b5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004044016s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.16s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-795000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.8.120 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-795000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-795000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-795000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-v5556" [3da1b814-3d83-4fa4-a9d1-a98af308a1ac] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-v5556" [3da1b814-3d83-4fa4-a9d1-a98af308a1ac] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004407372s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "207.064916ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "81.72439ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "217.997006ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "79.103814ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port327256451/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1703013161204360000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port327256451/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1703013161204360000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port327256451/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1703013161204360000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port327256451/001/test-1703013161204360000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (159.533778ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 19:12 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 19:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 19:12 test-1703013161204360000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh cat /mount-9p/test-1703013161204360000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-795000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cf2ab8a3-71f4-4b7e-b31c-fc9340749da3] Pending
helpers_test.go:344: "busybox-mount" [cf2ab8a3-71f4-4b7e-b31c-fc9340749da3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cf2ab8a3-71f4-4b7e-b31c-fc9340749da3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cf2ab8a3-71f4-4b7e-b31c-fc9340749da3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.006110233s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-795000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port327256451/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.30s)
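
The initial findmnt failure near the top of this test is benign: the backgrounded "minikube mount" daemon attaches the 9p filesystem asynchronously, so the first probe can race ahead of the mount, and the test polls until it appears. The same wait done by hand (the 1-second interval is an arbitrary choice):

    # Poll until the 9p mount becomes visible inside the guest.
    until out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p"; do
      sleep 1
    done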

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 service list -o json
functional_test.go:1493: Took "373.434647ms" to run "out/minikube-darwin-amd64 -p functional-795000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.170.3:31160
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.170.3:31160
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port749094304/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (188.701181ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port749094304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh "sudo umount -f /mount-9p": exit status 1 (137.087346ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-795000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port749094304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1909431490/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1909431490/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1909431490/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T" /mount1: exit status 1 (161.119272ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-795000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1909431490/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1909431490/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-795000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1909431490/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-795000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-795000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-795000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-795000 image ls --format short --alsologtostderr:
I1219 11:13:19.056650   22160 out.go:296] Setting OutFile to fd 1 ...
I1219 11:13:19.067927   22160 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:19.067943   22160 out.go:309] Setting ErrFile to fd 2...
I1219 11:13:19.067952   22160 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:19.068346   22160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
I1219 11:13:19.069473   22160 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:19.069689   22160 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:19.070299   22160 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:19.070356   22160 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:19.078586   22160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56620
I1219 11:13:19.079040   22160 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:19.079464   22160 main.go:141] libmachine: Using API Version  1
I1219 11:13:19.079475   22160 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:19.079686   22160 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:19.079789   22160 main.go:141] libmachine: (functional-795000) Calling .GetState
I1219 11:13:19.079875   22160 main.go:141] libmachine: (functional-795000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1219 11:13:19.079944   22160 main.go:141] libmachine: (functional-795000) DBG | hyperkit pid from json: 21445
I1219 11:13:19.081202   22160 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:19.081222   22160 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:19.089137   22160 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56622
I1219 11:13:19.089516   22160 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:19.089897   22160 main.go:141] libmachine: Using API Version  1
I1219 11:13:19.089917   22160 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:19.090118   22160 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:19.090224   22160 main.go:141] libmachine: (functional-795000) Calling .DriverName
I1219 11:13:19.090380   22160 ssh_runner.go:195] Run: systemctl --version
I1219 11:13:19.090403   22160 main.go:141] libmachine: (functional-795000) Calling .GetSSHHostname
I1219 11:13:19.090482   22160 main.go:141] libmachine: (functional-795000) Calling .GetSSHPort
I1219 11:13:19.090570   22160 main.go:141] libmachine: (functional-795000) Calling .GetSSHKeyPath
I1219 11:13:19.090668   22160 main.go:141] libmachine: (functional-795000) Calling .GetSSHUsername
I1219 11:13:19.090746   22160 sshutil.go:53] new ssh client: &{IP:192.168.170.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/functional-795000/id_rsa Username:docker}
I1219 11:13:19.131337   22160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1219 11:13:19.158756   22160 main.go:141] libmachine: Making call to close driver server
I1219 11:13:19.158766   22160 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:19.159014   22160 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:19.159019   22160 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:19.159029   22160 main.go:141] libmachine: Making call to close connection to plugin binary
I1219 11:13:19.159040   22160 main.go:141] libmachine: Making call to close driver server
I1219 11:13:19.159048   22160 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:19.159188   22160 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:19.159198   22160 main.go:141] libmachine: Making call to close connection to plugin binary
I1219 11:13:19.159198   22160 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-795000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | latest            | 2a36393edaf1b | 187MB  |
| docker.io/library/nginx                     | alpine            | 01e5c69afaf63 | 42.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-795000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-795000 | a15295e46c78a | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-795000 | 7d37bf58145bc | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-795000 image ls --format table --alsologtostderr:
I1219 11:13:25.472055   22185 out.go:296] Setting OutFile to fd 1 ...
I1219 11:13:25.472366   22185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:25.472372   22185 out.go:309] Setting ErrFile to fd 2...
I1219 11:13:25.472376   22185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:25.472569   22185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
I1219 11:13:25.473225   22185 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:25.473338   22185 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:25.473677   22185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:25.473730   22185 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:25.481741   22185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56652
I1219 11:13:25.482126   22185 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:25.482542   22185 main.go:141] libmachine: Using API Version  1
I1219 11:13:25.482554   22185 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:25.482765   22185 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:25.482864   22185 main.go:141] libmachine: (functional-795000) Calling .GetState
I1219 11:13:25.482954   22185 main.go:141] libmachine: (functional-795000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1219 11:13:25.483029   22185 main.go:141] libmachine: (functional-795000) DBG | hyperkit pid from json: 21445
I1219 11:13:25.484297   22185 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:25.484318   22185 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:25.492288   22185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56654
I1219 11:13:25.492656   22185 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:25.492965   22185 main.go:141] libmachine: Using API Version  1
I1219 11:13:25.492991   22185 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:25.493232   22185 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:25.493342   22185 main.go:141] libmachine: (functional-795000) Calling .DriverName
I1219 11:13:25.493488   22185 ssh_runner.go:195] Run: systemctl --version
I1219 11:13:25.493508   22185 main.go:141] libmachine: (functional-795000) Calling .GetSSHHostname
I1219 11:13:25.493582   22185 main.go:141] libmachine: (functional-795000) Calling .GetSSHPort
I1219 11:13:25.493680   22185 main.go:141] libmachine: (functional-795000) Calling .GetSSHKeyPath
I1219 11:13:25.493794   22185 main.go:141] libmachine: (functional-795000) Calling .GetSSHUsername
I1219 11:13:25.493872   22185 sshutil.go:53] new ssh client: &{IP:192.168.170.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/functional-795000/id_rsa Username:docker}
I1219 11:13:25.533895   22185 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1219 11:13:25.552910   22185 main.go:141] libmachine: Making call to close driver server
I1219 11:13:25.552978   22185 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:25.553271   22185 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:25.553280   22185 main.go:141] libmachine: Making call to close connection to plugin binary
I1219 11:13:25.553288   22185 main.go:141] libmachine: Making call to close driver server
I1219 11:13:25.553310   22185 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:25.553556   22185 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:25.553577   22185 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:25.553588   22185 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-795000 image ls --format json --alsologtostderr:
[{"id":"2a36393edaf1bcdb9d44bf9ed187b6ff6945b94eb369155d98e02d000609be05","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"7d37bf58145bcf5e47c4e335428c4bdf808712c8fa637810ca4dbfaa84dfd104","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-795000"],"size":"1240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-795000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"a15295e46c78ab13edae528c728351387528e5af2a8754c03f1d89234f748c95","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-795000"],"size":"30"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-795000 image ls --format json --alsologtostderr:
I1219 11:13:25.295101   22181 out.go:296] Setting OutFile to fd 1 ...
I1219 11:13:25.295368   22181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:25.295374   22181 out.go:309] Setting ErrFile to fd 2...
I1219 11:13:25.295378   22181 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:25.295586   22181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
I1219 11:13:25.296252   22181 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:25.296357   22181 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:25.296714   22181 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:25.296761   22181 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:25.304660   22181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56647
I1219 11:13:25.305093   22181 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:25.305510   22181 main.go:141] libmachine: Using API Version  1
I1219 11:13:25.305535   22181 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:25.305767   22181 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:25.305885   22181 main.go:141] libmachine: (functional-795000) Calling .GetState
I1219 11:13:25.305972   22181 main.go:141] libmachine: (functional-795000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1219 11:13:25.306039   22181 main.go:141] libmachine: (functional-795000) DBG | hyperkit pid from json: 21445
I1219 11:13:25.307336   22181 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:25.307364   22181 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:25.316001   22181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56649
I1219 11:13:25.316635   22181 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:25.317173   22181 main.go:141] libmachine: Using API Version  1
I1219 11:13:25.317183   22181 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:25.317550   22181 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:25.317680   22181 main.go:141] libmachine: (functional-795000) Calling .DriverName
I1219 11:13:25.317839   22181 ssh_runner.go:195] Run: systemctl --version
I1219 11:13:25.317861   22181 main.go:141] libmachine: (functional-795000) Calling .GetSSHHostname
I1219 11:13:25.317950   22181 main.go:141] libmachine: (functional-795000) Calling .GetSSHPort
I1219 11:13:25.318031   22181 main.go:141] libmachine: (functional-795000) Calling .GetSSHKeyPath
I1219 11:13:25.318119   22181 main.go:141] libmachine: (functional-795000) Calling .GetSSHUsername
I1219 11:13:25.318242   22181 sshutil.go:53] new ssh client: &{IP:192.168.170.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/functional-795000/id_rsa Username:docker}
I1219 11:13:25.366055   22181 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1219 11:13:25.386646   22181 main.go:141] libmachine: Making call to close driver server
I1219 11:13:25.386656   22181 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:25.386807   22181 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:25.386816   22181 main.go:141] libmachine: Making call to close connection to plugin binary
I1219 11:13:25.386823   22181 main.go:141] libmachine: Making call to close driver server
I1219 11:13:25.386828   22181 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:25.386832   22181 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:25.386968   22181 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:25.386976   22181 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:25.386990   22181 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-795000 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2a36393edaf1bcdb9d44bf9ed187b6ff6945b94eb369155d98e02d000609be05
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 01e5c69afaf635f66aab0b59404a0ac72db1e2e519c3f41a1ff53d37c35bba41
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-795000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: a15295e46c78ab13edae528c728351387528e5af2a8754c03f1d89234f748c95
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-795000
size: "30"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-795000 image ls --format yaml --alsologtostderr:
I1219 11:13:19.244648   22164 out.go:296] Setting OutFile to fd 1 ...
I1219 11:13:19.244976   22164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:19.244982   22164 out.go:309] Setting ErrFile to fd 2...
I1219 11:13:19.244987   22164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:19.245201   22164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
I1219 11:13:19.245876   22164 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:19.245978   22164 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:19.246445   22164 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:19.246501   22164 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:19.254789   22164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56625
I1219 11:13:19.255280   22164 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:19.255783   22164 main.go:141] libmachine: Using API Version  1
I1219 11:13:19.255816   22164 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:19.256044   22164 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:19.256154   22164 main.go:141] libmachine: (functional-795000) Calling .GetState
I1219 11:13:19.256244   22164 main.go:141] libmachine: (functional-795000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1219 11:13:19.256313   22164 main.go:141] libmachine: (functional-795000) DBG | hyperkit pid from json: 21445
I1219 11:13:19.257626   22164 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:19.257647   22164 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:19.265842   22164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56627
I1219 11:13:19.266251   22164 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:19.266587   22164 main.go:141] libmachine: Using API Version  1
I1219 11:13:19.266597   22164 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:19.266831   22164 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:19.266937   22164 main.go:141] libmachine: (functional-795000) Calling .DriverName
I1219 11:13:19.267093   22164 ssh_runner.go:195] Run: systemctl --version
I1219 11:13:19.267122   22164 main.go:141] libmachine: (functional-795000) Calling .GetSSHHostname
I1219 11:13:19.267206   22164 main.go:141] libmachine: (functional-795000) Calling .GetSSHPort
I1219 11:13:19.267289   22164 main.go:141] libmachine: (functional-795000) Calling .GetSSHKeyPath
I1219 11:13:19.267378   22164 main.go:141] libmachine: (functional-795000) Calling .GetSSHUsername
I1219 11:13:19.267450   22164 sshutil.go:53] new ssh client: &{IP:192.168.170.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/functional-795000/id_rsa Username:docker}
I1219 11:13:19.313468   22164 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1219 11:13:19.340184   22164 main.go:141] libmachine: Making call to close driver server
I1219 11:13:19.340195   22164 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:19.340357   22164 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:19.340360   22164 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:19.340369   22164 main.go:141] libmachine: Making call to close connection to plugin binary
I1219 11:13:19.340377   22164 main.go:141] libmachine: Making call to close driver server
I1219 11:13:19.340384   22164 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:19.340522   22164 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:19.340529   22164 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:19.340542   22164 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.18s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-795000 ssh pgrep buildkitd: exit status 1 (146.538693ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image build -t localhost/my-image:functional-795000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image build -t localhost/my-image:functional-795000 testdata/build --alsologtostderr: (5.519614994s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-795000 image build -t localhost/my-image:functional-795000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 6791077844f5
Removing intermediate container 6791077844f5
---> 0cb6c27b08ef
Step 3/3 : ADD content.txt /
---> 7d37bf58145b
Successfully built 7d37bf58145b
Successfully tagged localhost/my-image:functional-795000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-795000 image build -t localhost/my-image:functional-795000 testdata/build --alsologtostderr:
I1219 11:13:19.571986   22173 out.go:296] Setting OutFile to fd 1 ...
I1219 11:13:19.572240   22173 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:19.572247   22173 out.go:309] Setting ErrFile to fd 2...
I1219 11:13:19.572251   22173 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1219 11:13:19.572444   22173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
I1219 11:13:19.573080   22173 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:19.573709   22173 config.go:182] Loaded profile config "functional-795000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1219 11:13:19.574083   22173 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:19.574125   22173 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:19.583132   22173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56637
I1219 11:13:19.583666   22173 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:19.584184   22173 main.go:141] libmachine: Using API Version  1
I1219 11:13:19.584204   22173 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:19.584467   22173 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:19.584592   22173 main.go:141] libmachine: (functional-795000) Calling .GetState
I1219 11:13:19.584697   22173 main.go:141] libmachine: (functional-795000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1219 11:13:19.584812   22173 main.go:141] libmachine: (functional-795000) DBG | hyperkit pid from json: 21445
I1219 11:13:19.586199   22173 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1219 11:13:19.586224   22173 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1219 11:13:19.595079   22173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56639
I1219 11:13:19.595560   22173 main.go:141] libmachine: () Calling .GetVersion
I1219 11:13:19.596077   22173 main.go:141] libmachine: Using API Version  1
I1219 11:13:19.596100   22173 main.go:141] libmachine: () Calling .SetConfigRaw
I1219 11:13:19.596316   22173 main.go:141] libmachine: () Calling .GetMachineName
I1219 11:13:19.596421   22173 main.go:141] libmachine: (functional-795000) Calling .DriverName
I1219 11:13:19.596582   22173 ssh_runner.go:195] Run: systemctl --version
I1219 11:13:19.596603   22173 main.go:141] libmachine: (functional-795000) Calling .GetSSHHostname
I1219 11:13:19.596686   22173 main.go:141] libmachine: (functional-795000) Calling .GetSSHPort
I1219 11:13:19.596801   22173 main.go:141] libmachine: (functional-795000) Calling .GetSSHKeyPath
I1219 11:13:19.596883   22173 main.go:141] libmachine: (functional-795000) Calling .GetSSHUsername
I1219 11:13:19.596977   22173 sshutil.go:53] new ssh client: &{IP:192.168.170.3 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/functional-795000/id_rsa Username:docker}
I1219 11:13:19.642404   22173 build_images.go:151] Building image from path: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2337225425.tar
I1219 11:13:19.642497   22173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 11:13:19.649967   22173 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2337225425.tar
I1219 11:13:19.654346   22173 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2337225425.tar: stat -c "%s %y" /var/lib/minikube/build/build.2337225425.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2337225425.tar': No such file or directory
I1219 11:13:19.654375   22173 ssh_runner.go:362] scp /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2337225425.tar --> /var/lib/minikube/build/build.2337225425.tar (3072 bytes)
I1219 11:13:19.681071   22173 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2337225425
I1219 11:13:19.687428   22173 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2337225425 -xf /var/lib/minikube/build/build.2337225425.tar
I1219 11:13:19.697009   22173 docker.go:346] Building image: /var/lib/minikube/build/build.2337225425
I1219 11:13:19.697087   22173 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-795000 /var/lib/minikube/build/build.2337225425
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1219 11:13:24.988862   22173 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-795000 /var/lib/minikube/build/build.2337225425: (5.291665127s)
I1219 11:13:24.989014   22173 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2337225425
I1219 11:13:24.995972   22173 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2337225425.tar
I1219 11:13:25.002649   22173 build_images.go:207] Built localhost/my-image:functional-795000 from /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/build.2337225425.tar
I1219 11:13:25.002671   22173 build_images.go:123] succeeded building to: functional-795000
I1219 11:13:25.002676   22173 build_images.go:124] failed building to: 
I1219 11:13:25.002718   22173 main.go:141] libmachine: Making call to close driver server
I1219 11:13:25.002727   22173 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:25.002881   22173 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:25.002893   22173 main.go:141] libmachine: Making call to close connection to plugin binary
I1219 11:13:25.002902   22173 main.go:141] libmachine: Making call to close driver server
I1219 11:13:25.002911   22173 main.go:141] libmachine: (functional-795000) Calling .Close
I1219 11:13:25.002912   22173 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:25.003051   22173 main.go:141] libmachine: Successfully made call to close driver server
I1219 11:13:25.003060   22173 main.go:141] libmachine: (functional-795000) DBG | Closing plugin on server side
I1219 11:13:25.003062   22173 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.87s)
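
For reference: the three build steps in the stdout above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply that testdata/build contains a Dockerfile along the lines of the sketch below. The actual file is not part of this log, so treat the reconstruction as an assumption:

	# Assumed reconstruction of the testdata/build Dockerfile, inferred from the
	# "Step 1/3".."Step 3/3" lines logged above (hypothetical, not copied from the source tree):
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	# Build it against the cluster the same way the test does:
	out/minikube-darwin-amd64 -p functional-795000 image build -t localhost/my-image:functional-795000 .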

TestFunctional/parallel/ImageCommands/Setup (4.62s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.54846224s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-795000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.62s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image load --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image load --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr: (3.269374461s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.43s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image load --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image load --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr: (1.859539772s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/12/19 11:13:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (4.340279723s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-795000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image load --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image load --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr: (2.735770856s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.33s)

TestFunctional/parallel/DockerEnv/bash (0.83s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-795000 docker-env) && out/minikube-darwin-amd64 status -p functional-795000"
E1219 11:13:08.908293   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-795000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.83s)
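
What this test exercises: the docker-env subcommand prints shell export statements that point the local Docker client at the Docker daemon inside the minikube VM, which is why the docker images call above lists the cluster's images after the eval. A minimal sketch of the pattern, assuming the usual variables docker-env emits (exact values depend on the VM's IP and certificate paths):

	# Point this shell's Docker client at the daemon inside functional-795000,
	# then list images as the cluster sees them.
	eval "$(out/minikube-darwin-amd64 -p functional-795000 docker-env)"
	docker images
	# Illustrative exports produced by docker-env (values here are assumptions):
	#   export DOCKER_TLS_VERIFY="1"
	#   export DOCKER_HOST="tcp://192.168.170.3:2376"
	#   export DOCKER_CERT_PATH=...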

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image save gcr.io/google-containers/addon-resizer:functional-795000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image save gcr.io/google-containers/addon-resizer:functional-795000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.767019095s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.77s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image rm gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.368280399s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.54s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-795000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-795000 image save --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-795000 image save --daemon gcr.io/google-containers/addon-resizer:functional-795000 --alsologtostderr: (1.003455483s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-795000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.11s)

TestFunctional/delete_addon-resizer_images (0.13s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-795000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-795000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-795000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestIngressAddonLegacy/StartLegacyK8sCluster (73.89s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-943000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E1219 11:14:30.831302   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-943000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m13.891180024s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (73.89s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.24s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons enable ingress --alsologtostderr -v=5: (19.243199385s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (19.24s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.54s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-943000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-943000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.165758415s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-943000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-943000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b75ca382-d703-4eac-9f6b-65bae185c23e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b75ca382-d703-4eac-9f6b-65bae185c23e] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003886555s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-943000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.170.5
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons disable ingress-dns --alsologtostderr -v=1: (2.528093065s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 addons disable ingress --alsologtostderr -v=1: (7.27242368s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.85s)
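
Context for the ingress-dns check above: the addon runs a small DNS server inside the VM, so pointing nslookup at the minikube IP resolves Ingress hostnames such as hello-john.test to the cluster. A sketch of the manual equivalent, assuming the same profile is still running:

	# Resolve an Ingress hostname via the DNS server the ingress-dns addon runs in the VM.
	MINIKUBE_IP="$(out/minikube-darwin-amd64 -p ingress-addon-legacy-943000 ip)"
	nslookup hello-john.test "$MINIKUBE_IP"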

TestJSONOutput/start/Command (50.7s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-159000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E1219 11:16:46.982696   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-159000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (50.701260641s)
--- PASS: TestJSONOutput/start/Command (50.70s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-159000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-159000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.18s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-159000 --output=json --user=testUser
E1219 11:17:10.729819   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:10.735080   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:10.746835   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:10.767660   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:10.808180   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:10.889424   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:11.049726   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:11.370256   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:12.011788   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:13.292461   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-159000 --output=json --user=testUser: (8.175111885s)
--- PASS: TestJSONOutput/stop/Command (8.18s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.79s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-835000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-835000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (403.828258ms)

-- stdout --
	{"specversion":"1.0","id":"7e49aa85-d27c-4d51-b1bd-d135f5527dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-835000] minikube v1.32.0 on Darwin 14.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13bcbd10-a30e-4fec-8f75-5c60824261cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17837"}}
	{"specversion":"1.0","id":"de62c705-ba3f-40d1-ad60-4fbb28719057","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig"}}
	{"specversion":"1.0","id":"e299d9b5-e1ee-4428-ba95-b45fb8790089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"2105d8a8-e357-4cb2-a5dd-a5b19747f31c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66c95db8-3f1f-4124-9ce0-25f860aa2903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube"}}
	{"specversion":"1.0","id":"caa4f957-1158-4121-81ac-df57f4f6ef31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bed09e30-e132-4d3d-920a-c3c552fef319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-835000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-835000
--- PASS: TestErrorJSONOutput (0.79s)
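
Each stdout line above is a CloudEvents-style JSON object whose type field distinguishes steps, info, and errors. A sketch of pulling the error event out of such a stream with jq (jq is an assumption here; the test itself only inspects the raw output):

	# Extract the exit code and message from minikube's --output=json event stream.
	out/minikube-darwin-amd64 start -p json-output-error-835000 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'
	# Expected output, per the log above: 56: The driver 'fail' is not supported on darwin/amd64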

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
E1219 11:17:15.853098   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (83.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-832000 --driver=hyperkit 
E1219 11:17:20.974668   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:17:31.215865   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-832000 --driver=hyperkit : (35.693913247s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-835000 --driver=hyperkit 
E1219 11:17:51.696860   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-835000 --driver=hyperkit : (35.937458305s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-832000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-835000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-835000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-835000
E1219 11:18:32.659196   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-835000: (5.272839517s)
helpers_test.go:175: Cleaning up "first-832000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-832000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-832000: (5.318417664s)
--- PASS: TestMinikubeProfile (83.11s)

TestMountStart/serial/StartWithMountFirst (15.9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-260000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-260000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.899089521s)
--- PASS: TestMountStart/serial/StartWithMountFirst (15.90s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-260000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-260000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (16.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-276000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-276000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.03219239s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.03s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-276000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-276000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (2.36s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-260000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-260000 --alsologtostderr -v=5: (2.361016709s)
--- PASS: TestMountStart/serial/DeleteFirst (2.36s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-276000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-276000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (2.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-276000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-276000: (2.242851414s)
--- PASS: TestMountStart/serial/Stop (2.24s)

TestMountStart/serial/RestartStopped (16.72s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-276000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-276000: (15.718048886s)
--- PASS: TestMountStart/serial/RestartStopped (16.72s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-276000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-276000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (160.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-783000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1219 11:19:54.581558   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:20:34.678571   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:34.683925   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:34.695471   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:34.716383   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:34.756494   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:34.838552   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:35.000643   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:35.321742   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:35.963270   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:37.243916   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:39.804519   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:44.924785   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:20:55.164140   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:21:15.644945   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:21:46.965701   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:21:56.605679   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:22:10.710181   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-783000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m39.814994159s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (160.07s)

TestMultiNode/serial/DeployApp2Nodes (8.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-783000 -- rollout status deployment/busybox: (6.283818421s)
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-6gqfv -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-mqhj4 -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-6gqfv -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-mqhj4 -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-6gqfv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-mqhj4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.22s)
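
Note: the DeployApp2Nodes DNS checks above walk each busybox pod through the three name forms a pod must resolve. A condensed sketch of the same checks run by hand (pod names such as busybox-5bc68d56bd-6gqfv are per-run; substitute your own):

  # external name, short in-cluster service name, and the full service FQDN
  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec <pod> -- nslookup kubernetes.io
  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec <pod> -- nslookup kubernetes.default
  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local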

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-6gqfv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-6gqfv -- sh -c "ping -c 1 192.168.170.1"
multinode_test.go:588: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-mqhj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-783000 -- exec busybox-5bc68d56bd-mqhj4 -- sh -c "ping -c 1 192.168.170.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
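
Note: the pipeline above recovers the host's IP from inside each pod before pinging it. A sketch of why NR==5 and -f3 land on the address, assuming busybox-style nslookup output (the exact line layout can differ between busybox builds):

  # nslookup host.minikube.internal prints roughly:
  #   line 1: Server:    10.96.0.10
  #   line 2: Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  #   line 3: (blank)
  #   line 4: Name:      host.minikube.internal
  #   line 5: Address 1: 192.168.170.1
  nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
  # awk keeps only line 5; cut takes its third space-separated field,
  # i.e. 192.168.170.1 here, which 'ping -c 1' then verifies.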

TestMultiNode/serial/AddNode (37.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-783000 -v 3 --alsologtostderr
E1219 11:22:38.402383   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-783000 -v 3 --alsologtostderr: (37.245075081s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.58s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-783000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)
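
Note: the jsonpath query above dumps every node's label map in a single call. The output shape, abbreviated to the hostname label (real nodes carry many more labels):

  kubectl --context multinode-783000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
  # [{"kubernetes.io/hostname":"multinode-783000",...},{"kubernetes.io/hostname":"multinode-783000-m02",...},]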

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (5.56s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp testdata/cp-test.txt multinode-783000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile522806871/001/cp-test_multinode-783000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000:/home/docker/cp-test.txt multinode-783000-m02:/home/docker/cp-test_multinode-783000_multinode-783000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m02 "sudo cat /home/docker/cp-test_multinode-783000_multinode-783000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000:/home/docker/cp-test.txt multinode-783000-m03:/home/docker/cp-test_multinode-783000_multinode-783000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m03 "sudo cat /home/docker/cp-test_multinode-783000_multinode-783000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp testdata/cp-test.txt multinode-783000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile522806871/001/cp-test_multinode-783000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000-m02:/home/docker/cp-test.txt multinode-783000:/home/docker/cp-test_multinode-783000-m02_multinode-783000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000 "sudo cat /home/docker/cp-test_multinode-783000-m02_multinode-783000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000-m02:/home/docker/cp-test.txt multinode-783000-m03:/home/docker/cp-test_multinode-783000-m02_multinode-783000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m03 "sudo cat /home/docker/cp-test_multinode-783000-m02_multinode-783000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp testdata/cp-test.txt multinode-783000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile522806871/001/cp-test_multinode-783000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000-m03:/home/docker/cp-test.txt multinode-783000:/home/docker/cp-test_multinode-783000-m03_multinode-783000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000 "sudo cat /home/docker/cp-test_multinode-783000-m03_multinode-783000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 cp multinode-783000-m03:/home/docker/cp-test.txt multinode-783000-m02:/home/docker/cp-test_multinode-783000-m03_multinode-783000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 ssh -n multinode-783000-m02 "sudo cat /home/docker/cp-test_multinode-783000-m03_multinode-783000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.56s)
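
Note: the CopyFile steps above exercise every direction 'minikube cp' supports. The three general forms, condensed from this run (node names are multinode-783000, -m02, -m03; the host destination path here is illustrative):

  out/minikube-darwin-amd64 -p multinode-783000 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt    # host -> node
  out/minikube-darwin-amd64 -p multinode-783000 cp <node>:/home/docker/cp-test.txt /tmp/cp-test_<node>.txt # node -> host
  out/minikube-darwin-amd64 -p multinode-783000 cp <src>:/home/docker/cp-test.txt <dst>:/home/docker/cp-test_<src>_<dst>.txt  # node -> node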

TestMultiNode/serial/StopNode (2.74s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-darwin-amd64 -p multinode-783000 node stop m03: (2.211349251s)
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-783000 status: exit status 7 (266.245609ms)
-- stdout --
	multinode-783000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-783000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-783000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr: exit status 7 (260.277627ms)
-- stdout --
	multinode-783000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-783000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-783000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1219 11:23:11.247965   22948 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:23:11.248255   22948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:23:11.248261   22948 out.go:309] Setting ErrFile to fd 2...
	I1219 11:23:11.248265   22948 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:23:11.248480   22948 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:23:11.248665   22948 out.go:303] Setting JSON to false
	I1219 11:23:11.248693   22948 mustload.go:65] Loading cluster: multinode-783000
	I1219 11:23:11.248747   22948 notify.go:220] Checking for updates...
	I1219 11:23:11.249031   22948 config.go:182] Loaded profile config "multinode-783000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:23:11.249046   22948 status.go:255] checking status of multinode-783000 ...
	I1219 11:23:11.249434   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.249480   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.257685   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57587
	I1219 11:23:11.258057   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.258457   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.258466   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.258678   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.258785   22948 main.go:141] libmachine: (multinode-783000) Calling .GetState
	I1219 11:23:11.258875   22948 main.go:141] libmachine: (multinode-783000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:23:11.258943   22948 main.go:141] libmachine: (multinode-783000) DBG | hyperkit pid from json: 22650
	I1219 11:23:11.260171   22948 status.go:330] multinode-783000 host status = "Running" (err=<nil>)
	I1219 11:23:11.260188   22948 host.go:66] Checking if "multinode-783000" exists ...
	I1219 11:23:11.260420   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.260451   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.268180   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57589
	I1219 11:23:11.268537   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.268843   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.268859   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.269107   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.269214   22948 main.go:141] libmachine: (multinode-783000) Calling .GetIP
	I1219 11:23:11.269300   22948 host.go:66] Checking if "multinode-783000" exists ...
	I1219 11:23:11.269557   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.269585   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.277709   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57591
	I1219 11:23:11.278049   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.278375   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.278385   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.278569   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.278672   22948 main.go:141] libmachine: (multinode-783000) Calling .DriverName
	I1219 11:23:11.278818   22948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 11:23:11.278842   22948 main.go:141] libmachine: (multinode-783000) Calling .GetSSHHostname
	I1219 11:23:11.278918   22948 main.go:141] libmachine: (multinode-783000) Calling .GetSSHPort
	I1219 11:23:11.279000   22948 main.go:141] libmachine: (multinode-783000) Calling .GetSSHKeyPath
	I1219 11:23:11.279080   22948 main.go:141] libmachine: (multinode-783000) Calling .GetSSHUsername
	I1219 11:23:11.279162   22948 sshutil.go:53] new ssh client: &{IP:192.168.170.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/multinode-783000/id_rsa Username:docker}
	I1219 11:23:11.323330   22948 ssh_runner.go:195] Run: systemctl --version
	I1219 11:23:11.327081   22948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 11:23:11.336069   22948 kubeconfig.go:92] found "multinode-783000" server: "https://192.168.170.11:8443"
	I1219 11:23:11.336089   22948 api_server.go:166] Checking apiserver status ...
	I1219 11:23:11.336130   22948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 11:23:11.344557   22948 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1884/cgroup
	I1219 11:23:11.350207   22948 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/podeae769830b4d80377615e8fa0dcd6011/5ed4a3e39982f0c2757f40a7f6ced6c2384b6c6076cf354a456f9c25ebc223f1"
	I1219 11:23:11.350244   22948 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podeae769830b4d80377615e8fa0dcd6011/5ed4a3e39982f0c2757f40a7f6ced6c2384b6c6076cf354a456f9c25ebc223f1/freezer.state
	I1219 11:23:11.356375   22948 api_server.go:204] freezer state: "THAWED"
	I1219 11:23:11.356396   22948 api_server.go:253] Checking apiserver healthz at https://192.168.170.11:8443/healthz ...
	I1219 11:23:11.359729   22948 api_server.go:279] https://192.168.170.11:8443/healthz returned 200:
	ok
	I1219 11:23:11.359740   22948 status.go:421] multinode-783000 apiserver status = Running (err=<nil>)
	I1219 11:23:11.359753   22948 status.go:257] multinode-783000 status: &{Name:multinode-783000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 11:23:11.359764   22948 status.go:255] checking status of multinode-783000-m02 ...
	I1219 11:23:11.360026   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.360047   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.367949   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57595
	I1219 11:23:11.368295   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.368623   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.368634   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.368856   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.368950   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .GetState
	I1219 11:23:11.369032   22948 main.go:141] libmachine: (multinode-783000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:23:11.369100   22948 main.go:141] libmachine: (multinode-783000-m02) DBG | hyperkit pid from json: 22665
	I1219 11:23:11.370293   22948 status.go:330] multinode-783000-m02 host status = "Running" (err=<nil>)
	I1219 11:23:11.370303   22948 host.go:66] Checking if "multinode-783000-m02" exists ...
	I1219 11:23:11.370571   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.370593   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.378582   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57597
	I1219 11:23:11.378941   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.379325   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.379340   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.379564   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.379673   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .GetIP
	I1219 11:23:11.379757   22948 host.go:66] Checking if "multinode-783000-m02" exists ...
	I1219 11:23:11.380031   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.380057   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.388130   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57599
	I1219 11:23:11.388487   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.388851   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.388874   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.389101   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.389194   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .DriverName
	I1219 11:23:11.389313   22948 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 11:23:11.389325   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .GetSSHHostname
	I1219 11:23:11.389418   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .GetSSHPort
	I1219 11:23:11.389489   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .GetSSHKeyPath
	I1219 11:23:11.389598   22948 main.go:141] libmachine: (multinode-783000-m02) Calling .GetSSHUsername
	I1219 11:23:11.389672   22948 sshutil.go:53] new ssh client: &{IP:192.168.170.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17837-20429/.minikube/machines/multinode-783000-m02/id_rsa Username:docker}
	I1219 11:23:11.432820   22948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 11:23:11.441097   22948 status.go:257] multinode-783000-m02 status: &{Name:multinode-783000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1219 11:23:11.441113   22948 status.go:255] checking status of multinode-783000-m03 ...
	I1219 11:23:11.441368   22948 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:23:11.441392   22948 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:23:11.449444   22948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57602
	I1219 11:23:11.449797   22948 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:23:11.450136   22948 main.go:141] libmachine: Using API Version  1
	I1219 11:23:11.450150   22948 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:23:11.450366   22948 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:23:11.450468   22948 main.go:141] libmachine: (multinode-783000-m03) Calling .GetState
	I1219 11:23:11.450552   22948 main.go:141] libmachine: (multinode-783000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:23:11.450622   22948 main.go:141] libmachine: (multinode-783000-m03) DBG | hyperkit pid from json: 22743
	I1219 11:23:11.451783   22948 main.go:141] libmachine: (multinode-783000-m03) DBG | hyperkit pid 22743 missing from process table
	I1219 11:23:11.451802   22948 status.go:330] multinode-783000-m03 host status = "Stopped" (err=<nil>)
	I1219 11:23:11.451809   22948 status.go:343] host is not running, skipping remaining checks
	I1219 11:23:11.451820   22948 status.go:257] multinode-783000-m03 status: &{Name:multinode-783000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.74s)

TestMultiNode/serial/StartAfterStop (27.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 node start m03 --alsologtostderr
E1219 11:23:18.527427   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-783000 node start m03 --alsologtostderr: (27.106504029s)
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.49s)

TestMultiNode/serial/RestartKeepsNodes (164.06s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-783000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-783000
multinode_test.go:318: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-783000: (18.397076426s)
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-783000 --wait=true -v=8 --alsologtostderr
E1219 11:25:34.675127   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:26:02.368989   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-783000 --wait=true -v=8 --alsologtostderr: (2m25.471087591s)
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-783000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (164.06s)

TestMultiNode/serial/DeleteNode (2.99s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-darwin-amd64 -p multinode-783000 node delete m03: (2.633171412s)
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.99s)
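
Note: the go-template query above prints the status of each node's Ready condition, one per line. Expanded for readability; after deleting m03 the expected output is one True per remaining node:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
  #  True
  #  True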

TestMultiNode/serial/StopMultiNode (16.5s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 stop
multinode_test.go:342: (dbg) Done: out/minikube-darwin-amd64 -p multinode-783000 stop: (16.339804807s)
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-783000 status: exit status 7 (77.084402ms)
-- stdout --
	multinode-783000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-783000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr: exit status 7 (77.737984ms)
-- stdout --
	multinode-783000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-783000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1219 11:26:42.455539   23070 out.go:296] Setting OutFile to fd 1 ...
	I1219 11:26:42.455755   23070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:26:42.455760   23070 out.go:309] Setting ErrFile to fd 2...
	I1219 11:26:42.455764   23070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1219 11:26:42.455958   23070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17837-20429/.minikube/bin
	I1219 11:26:42.456142   23070 out.go:303] Setting JSON to false
	I1219 11:26:42.456165   23070 mustload.go:65] Loading cluster: multinode-783000
	I1219 11:26:42.456208   23070 notify.go:220] Checking for updates...
	I1219 11:26:42.456472   23070 config.go:182] Loaded profile config "multinode-783000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1219 11:26:42.456485   23070 status.go:255] checking status of multinode-783000 ...
	I1219 11:26:42.456854   23070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:26:42.456896   23070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:26:42.465006   23070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57784
	I1219 11:26:42.465360   23070 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:26:42.465786   23070 main.go:141] libmachine: Using API Version  1
	I1219 11:26:42.465797   23070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:26:42.466043   23070 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:26:42.466156   23070 main.go:141] libmachine: (multinode-783000) Calling .GetState
	I1219 11:26:42.466257   23070 main.go:141] libmachine: (multinode-783000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:26:42.466313   23070 main.go:141] libmachine: (multinode-783000) DBG | hyperkit pid from json: 23010
	I1219 11:26:42.467230   23070 main.go:141] libmachine: (multinode-783000) DBG | hyperkit pid 23010 missing from process table
	I1219 11:26:42.467278   23070 status.go:330] multinode-783000 host status = "Stopped" (err=<nil>)
	I1219 11:26:42.467286   23070 status.go:343] host is not running, skipping remaining checks
	I1219 11:26:42.467291   23070 status.go:257] multinode-783000 status: &{Name:multinode-783000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 11:26:42.467315   23070 status.go:255] checking status of multinode-783000-m02 ...
	I1219 11:26:42.467575   23070 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1219 11:26:42.467595   23070 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1219 11:26:42.475361   23070 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57786
	I1219 11:26:42.475728   23070 main.go:141] libmachine: () Calling .GetVersion
	I1219 11:26:42.476071   23070 main.go:141] libmachine: Using API Version  1
	I1219 11:26:42.476088   23070 main.go:141] libmachine: () Calling .SetConfigRaw
	I1219 11:26:42.476283   23070 main.go:141] libmachine: () Calling .GetMachineName
	I1219 11:26:42.476381   23070 main.go:141] libmachine: (multinode-783000-m02) Calling .GetState
	I1219 11:26:42.476452   23070 main.go:141] libmachine: (multinode-783000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1219 11:26:42.476514   23070 main.go:141] libmachine: (multinode-783000-m02) DBG | hyperkit pid from json: 23020
	I1219 11:26:42.477438   23070 main.go:141] libmachine: (multinode-783000-m02) DBG | hyperkit pid 23020 missing from process table
	I1219 11:26:42.477489   23070 status.go:330] multinode-783000-m02 host status = "Stopped" (err=<nil>)
	I1219 11:26:42.477496   23070 status.go:343] host is not running, skipping remaining checks
	I1219 11:26:42.477502   23070 status.go:257] multinode-783000-m02 status: &{Name:multinode-783000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.50s)

TestMultiNode/serial/RestartMultiNode (103.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-783000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1219 11:26:46.967601   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:27:10.713040   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:28:10.021234   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-783000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m42.771586603s)
multinode_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-783000 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (103.11s)

TestMultiNode/serial/ValidateNameConflict (45.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-783000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-783000-m02 --driver=hyperkit 
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-783000-m02 --driver=hyperkit : exit status 14 (486.37107ms)
-- stdout --
	* [multinode-783000-m02] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-783000-m02' is duplicated with machine name 'multinode-783000-m02' in profile 'multinode-783000'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-783000-m03 --driver=hyperkit 
multinode_test.go:488: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-783000-m03 --driver=hyperkit : (37.585083467s)
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-783000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-783000: exit status 80 (274.86153ms)
-- stdout --
	* Adding node m03 to cluster multinode-783000
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-783000-m03 already exists in multinode-783000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-783000-m03
multinode_test.go:500: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-783000-m03: (6.773343662s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.18s)

TestPreload (167.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-520000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1219 11:30:34.677228   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-520000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m34.702047991s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-520000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-520000 image pull gcr.io/k8s-minikube/busybox: (4.59615659s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-520000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-520000: (8.35408706s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-520000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E1219 11:31:46.970385   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-520000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (54.805338086s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-520000 image list
helpers_test.go:175: Cleaning up "test-preload-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-520000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-520000: (5.297924059s)
--- PASS: TestPreload (167.95s)
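
Note: the preload test boots with --preload=false on an older Kubernetes, pulls an extra image, then restarts and verifies the image survived the stop/start cycle. The flow, condensed from the steps above:

  out/minikube-darwin-amd64 start -p test-preload-520000 --preload=false --kubernetes-version=v1.24.4 --driver=hyperkit
  out/minikube-darwin-amd64 -p test-preload-520000 image pull gcr.io/k8s-minikube/busybox
  out/minikube-darwin-amd64 stop -p test-preload-520000
  out/minikube-darwin-amd64 start -p test-preload-520000 --driver=hyperkit   # restart without the version pin
  out/minikube-darwin-amd64 -p test-preload-520000 image list                # busybox should still be listed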

TestScheduledStopUnix (113.6s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-584000 --memory=2048 --driver=hyperkit 
E1219 11:32:10.714345   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-584000 --memory=2048 --driver=hyperkit : (41.882859659s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-584000 -n scheduled-stop-584000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-584000 -n scheduled-stop-584000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-584000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1219 11:33:33.767495   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-584000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-584000: exit status 7 (68.386025ms)
-- stdout --
	scheduled-stop-584000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-584000 -n scheduled-stop-584000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-584000 -n scheduled-stop-584000: exit status 7 (67.762138ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-584000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-584000
--- PASS: TestScheduledStopUnix (113.60s)
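
Note: the scheduled-stop test drives the --schedule/--cancel-scheduled pair and polls status until the stop lands. The essential sequence, condensed from the steps above:

  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --schedule 5m                 # arm a stop 5 minutes out
  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-584000    # remaining time while armed
  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --cancel-scheduled            # disarm
  out/minikube-darwin-amd64 stop -p scheduled-stop-584000 --schedule 15s                # re-arm; status exits 7 (Stopped) once it fires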

TestSkaffold (123.14s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1848458097 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-596000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-596000 --memory=2600 --driver=hyperkit : (36.22179604s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1848458097 run --minikube-profile skaffold-596000 --kube-context skaffold-596000 --status-check=true --port-forward=false --interactive=false
E1219 11:35:34.679995   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe1848458097 run --minikube-profile skaffold-596000 --kube-context skaffold-596000 --status-check=true --port-forward=false --interactive=false: (1m7.343386286s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-96f5866f-jl2z2" [743845d5-7e31-4fb6-ba4d-209c6e27de38] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002569069s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58bd6b7459-8mxzs" [5b74abd8-93ae-4717-b387-e2422fcd8f55] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003580604s
helpers_test.go:175: Cleaning up "skaffold-596000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-596000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-596000: (6.169882842s)
--- PASS: TestSkaffold (123.14s)

TestKubernetesUpgrade (148.35s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m15.326694025s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-792000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-792000: (2.238694646s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-792000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-792000 status --format={{.Host}}: exit status 7 (69.488352ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit : (32.468586712s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-792000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (449.964209ms)
-- stdout --
	* [kubernetes-upgrade-792000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-792000
	    minikube start -p kubernetes-upgrade-792000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7920002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-792000 --kubernetes-version=v1.29.0-rc.2
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit 
E1219 11:40:34.694667   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:40:48.861560   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:48.867037   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:48.877383   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:48.899126   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:48.940154   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:49.021748   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:49.232692   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:49.554091   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:50.194461   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:51.475981   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-792000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperkit : (32.462327242s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-792000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-792000
E1219 11:40:54.036408   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:40:59.201400   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-792000: (5.275749813s)
--- PASS: TestKubernetesUpgrade (148.35s)
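Editor's note: the downgrade attempt above is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) in under half a second, before any VM work starts, and the test only asserts on that exit code. A minimal sketch of such an exit-code assertion with os/exec (the assertExitCode helper is hypothetical, not the actual minikube test harness):

// downgrade_check.go — a minimal sketch, assuming a hypothetical
// assertExitCode helper; this is not code from the minikube test suite.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// assertExitCode runs the command and reports whether it exited with want.
func assertExitCode(want int, name string, args ...string) error {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		if ee.ExitCode() == want {
			return nil // expected failure, e.g. 106 for K8S_DOWNGRADE_UNSUPPORTED
		}
		return fmt.Errorf("got exit status %d, want %d", ee.ExitCode(), want)
	}
	if err == nil && want == 0 {
		return nil
	}
	return fmt.Errorf("unexpected error: %v", err)
}

func main() {
	// Downgrading a v1.29.0-rc.2 cluster to v1.16.0 must be refused with 106.
	err := assertExitCode(106, "out/minikube-darwin-amd64",
		"start", "-p", "kubernetes-upgrade-792000",
		"--memory=2200", "--kubernetes-version=v1.16.0", "--driver=hyperkit")
	fmt.Println(err)
}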

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (5.19s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17837
- KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current632448025/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current632448025/001/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current632448025/001/.minikube/bin/docker-machine-driver-hyperkit

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current632448025/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (5.19s)
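Editor's note: the chown root:wheel + chmod u+s pair is what gives the hyperkit driver binary the elevated permissions the log mentions; when sudo would need a password and --interactive=false is set, minikube skips the update and continues, which is what this test exercises. A short sketch (assumed filename and install path, not code from the suite) of checking whether a driver binary already carries the expected root-owned setuid bit:

// setuid_check.go — a minimal sketch, assuming the path below;
// not code from the minikube repo.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	path := "/usr/local/bin/docker-machine-driver-hyperkit" // assumed install path
	fi, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat:", err)
		return
	}
	setuid := fi.Mode()&os.ModeSetuid != 0
	st, ok := fi.Sys().(*syscall.Stat_t)
	rootOwned := ok && st.Uid == 0
	// Both must hold; otherwise the sudo chown/chmod pair would be re-run.
	fmt.Printf("setuid=%v rootOwned=%v\n", setuid, rootOwned)
}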

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.48s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin
- MINIKUBE_LOCATION=17837
- KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current901608591/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.48s)

TestStoppedBinaryUpgrade/Setup (1.27s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.27s)

TestStoppedBinaryUpgrade/Upgrade (190.01s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3140076965.exe start -p stopped-upgrade-051000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:196: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3140076965.exe start -p stopped-upgrade-051000 --memory=2200 --vm-driver=hyperkit : (1m51.394915172s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3140076965.exe -p stopped-upgrade-051000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.6.2.3140076965.exe -p stopped-upgrade-051000 stop: (8.0761173s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-051000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:211: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-051000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m10.53885535s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (190.01s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.5s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-924000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-924000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (502.440293ms)

-- stdout --
	* [NoKubernetes-924000] minikube v1.32.0 on Darwin 14.2
	  - MINIKUBE_LOCATION=17837
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17837-20429/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17837-20429/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.50s)
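Editor's note: exit status 14 is the MK_USAGE error shown in the stderr above; the conflicting flags are rejected during argument validation, before any driver work. A minimal sketch of the same mutually-exclusive-flag check using only the standard flag package (flag names mirror the CLI; this is not minikube's actual parser):

// flags_conflict.go — a minimal sketch of mutually exclusive flags,
// assuming these names; not minikube's actual argument parsing.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		// Mirrors MK_USAGE: refuse the combination with a distinct exit code.
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags OK")
}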

                                                
                                    
TestNoKubernetes/serial/Start (19.04s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-924000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-924000 --no-kubernetes --driver=hyperkit : (19.04207466s)
--- PASS: TestNoKubernetes/serial/Start (19.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.67s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-051000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-051000: (2.671831738s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.67s)

TestPause/serial/Start (50.77s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-326000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-326000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (50.77013742s)
--- PASS: TestPause/serial/Start (50.77s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (132.170694ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
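Editor's note: the inner "status 3" is systemd's convention for is-active on an inactive unit (0 means active, non-zero otherwise, typically 3 for inactive); minikube ssh then surfaces its own exit status 1, which the test treats as "kubelet not running". A small sketch of interpreting that exit code directly (assumes a local systemd host, purely illustrative; the test goes through `minikube ssh` instead):

// kubelet_inactive.go — a minimal sketch; assumes it runs on a systemd
// host, which is not how the test works.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &ee):
		// systemd returns non-zero (typically 3) for inactive/dead units.
		fmt.Printf("kubelet not active (exit %d)\n", ee.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}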

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.45s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.45s)

TestNoKubernetes/serial/Stop (2.24s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-924000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-924000: (2.238166493s)
--- PASS: TestNoKubernetes/serial/Stop (2.24s)

TestNoKubernetes/serial/StartNoArgs (23.74s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-924000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-924000 --driver=hyperkit : (23.740444226s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-924000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (143.338697ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

TestNetworkPlugins/group/auto/Start (50.94s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E1219 11:44:50.042135   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (50.937994292s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.94s)

TestPause/serial/SecondStartNoReconfiguration (36.43s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-326000 --alsologtostderr -v=1 --driver=hyperkit 
E1219 11:45:34.696528   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-326000 --alsologtostderr -v=1 --driver=hyperkit : (36.41852401s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

TestNetworkPlugins/group/auto/NetCatPod (18.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-87ffg" [cd25790a-c1f9-4e67-8907-137a1806b650] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-87ffg" [cd25790a-c1f9-4e67-8907-137a1806b650] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 18.003953342s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (18.22s)

TestPause/serial/Pause (0.54s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-326000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

TestPause/serial/VerifyStatus (0.17s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-326000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-326000 --output=json --layout=cluster: exit status 2 (167.649628ms)

-- stdout --
	{"Name":"pause-326000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-326000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.17s)
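Editor's note: the cluster layout above reuses HTTP-style status codes (418 Paused, 405 Stopped, 200 OK), and status exits non-zero for a paused cluster so scripts can branch on it. A sketch of decoding just the fields shown, using encoding/json (struct names are mine; only the JSON keys come from the output above):

// pause_status.go — decodes part of the `minikube status --output=json
// --layout=cluster` payload shown above; struct names are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	Components map[string]component `json:"Components"`
}

type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	raw := []byte(`{"Name":"pause-326000","StatusCode":418,"StatusName":"Paused",` +
		`"Nodes":[{"Name":"pause-326000","StatusCode":200,` +
		`"Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var st clusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// 418 is the report's marker for "Paused".
	fmt.Println(st.StatusName, "->", st.Nodes[0].Components["kubelet"].StatusName)
}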

                                                
                                    
TestPause/serial/Unpause (0.54s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-326000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.54s)

TestPause/serial/PauseAgain (0.67s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-326000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.67s)

TestPause/serial/DeletePaused (5.28s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-326000 --alsologtostderr -v=5
E1219 11:45:48.864218   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-326000 --alsologtostderr -v=5: (5.282426445s)
--- PASS: TestPause/serial/DeletePaused (5.28s)

TestPause/serial/VerifyDeletedResources (0.23s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.23s)

TestNetworkPlugins/group/kindnet/Start (58.23s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (58.234492522s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.23s)

TestNetworkPlugins/group/auto/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
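Editor's note: HairPin is the interesting one of the DNS/Localhost/HairPin trio: the netcat pod dials its own Service name ("netcat"), so traffic leaves the pod and must be routed back to it (hairpin NAT). In the nc command, -z makes it a connect-only probe and -w 5 caps the wait at 5 seconds. The same probe in Go (service name and port taken from the command above; this is just the probe, not the test harness):

// hairpin_probe.go — a connect-only probe equivalent to
// `nc -w 5 -z netcat 8080`, meant to run inside the netcat pod.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dialing our own Service name exercises hairpin NAT: the packet
	// exits the pod and is DNAT'ed back to it.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("hairpin probe OK")
}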

                                                
                                    
TestNetworkPlugins/group/calico/Start (83.15s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E1219 11:46:16.649640   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:46:46.988418   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m23.15330632s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.15s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-x4lgr" [a579095f-3458-4c2f-8d48-6a9a50127752] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00277309s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)
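Editor's note: ControllerPod waits (up to 10m) for a pod matching the CNI's label to reach Running before the connectivity subtests run. A rough equivalent of that wait, shelling out to kubectl and polling the pod phase (the waitForRunning helper and the poll interval are mine; the label and namespace come from the log above):

// wait_kindnet.go — a minimal sketch of the "wait for app=kindnet" step,
// polling kubectl; waitForRunning is a hypothetical helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForRunning(label, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods",
			"-l", label, "-n", namespace,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is arbitrary here
	}
	return fmt.Errorf("no Running pod with label %q in %s after %v", label, namespace, timeout)
}

func main() {
	fmt.Println(waitForRunning("app=kindnet", "kube-system", 10*time.Minute))
}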

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-btzn2" [570658cc-9b60-494f-b0cb-24cae0aa4a2a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-btzn2" [570658cc-9b60-494f-b0cb-24cae0aa4a2a] Running
E1219 11:47:10.733100   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.003919157s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.23s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (63.64s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m3.636205369s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.64s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gqp6r" [433eb527-ccde-42f4-8c31-d5d93f59cdc7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004745617s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.18s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

TestNetworkPlugins/group/calico/NetCatPod (15.22s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w74hh" [ca790ca5-4879-4f0c-9c9c-aceda0f91942] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w74hh" [ca790ca5-4879-4f0c-9c9c-aceda0f91942] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.003108421s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.22s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (16.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-thl9q" [6041ef1f-72ac-4202-a4cf-17c4c96d0e9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-thl9q" [6041ef1f-72ac-4202-a4cf-17c4c96d0e9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 16.003764818s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (16.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/flannel/Start (59.47s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (59.472374957s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.47s)

TestNetworkPlugins/group/bridge/Start (56.83s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (56.829614173s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.83s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vbb62" [6790a4b7-2976-4140-b2f1-c842186c12e1] Running
E1219 11:50:13.787689   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004068654s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.17s)

TestNetworkPlugins/group/flannel/NetCatPod (16.24s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qs5rz" [90574e81-f235-4dbf-92ad-7598537bd7c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qs5rz" [90574e81-f235-4dbf-92ad-7598537bd7c2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.003850547s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

TestNetworkPlugins/group/bridge/NetCatPod (15.21s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hl2th" [e42517d3-de90-4578-8bcb-90c929104418] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hl2th" [e42517d3-de90-4578-8bcb-90c929104418] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.004231778s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.21s)

TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1219 11:50:34.698376   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (52.06s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-377000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (52.063702714s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.06s)

TestStartStop/group/old-k8s-version/serial/FirstStart (157.81s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-509000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1219 11:51:00.569773   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
E1219 11:51:21.062385   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-509000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m37.805152131s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (157.81s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-377000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kubenet/NetCatPod (16.24s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-377000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b47zt" [c3a98cd6-7191-4af1-b147-c4934d78a7d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1219 11:51:47.009936   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:51:51.806556   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:51.811940   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:51.822096   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:51.842206   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:51.883106   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:51.964610   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:52.124718   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:52.446307   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:53.087267   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:51:54.368945   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-b47zt" [c3a98cd6-7191-4af1-b147-c4934d78a7d6] Running
E1219 11:51:56.931091   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.003475575s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.24s)

TestNetworkPlugins/group/kubenet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-377000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.14s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1219 11:52:02.032524   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-377000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1219 11:52:02.052072   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E1219 12:08:14.863050   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 12:08:28.986708   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:08:31.401790   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:08:37.945963   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (68.65s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-978000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1219 11:52:32.773919   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:52:39.609088   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:39.614763   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:39.625511   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:39.645868   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:39.685958   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:39.766992   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:39.927173   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:40.248018   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:40.888172   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:42.169708   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:44.731575   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:52:49.853003   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:53:00.093695   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:53:13.735188   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:53:20.574816   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:53:23.955441   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-978000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (1m8.652642773s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.65s)

TestStartStop/group/no-preload/serial/DeployApp (11.63s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-978000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c5ae0f5e-95dd-4210-a2cf-ee597ff45298] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c5ae0f5e-95dd-4210-a2cf-ee597ff45298] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004352925s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-978000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.63s)
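For anyone replaying DeployApp by hand: the harness applies testdata/busybox.yaml from the minikube repo and waits up to 8m0s for the integration-test=busybox pod to become healthy. Below is a minimal shell sketch of the same flow; the inline manifest is an approximation of that file (pod name, label, and image are taken from the log; the sleep command is an assumption), not the repo's exact copy.

# Sketch: deploy the busybox test pod, wait for readiness, run the ulimit probe.
kubectl --context no-preload-978000 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc  # image reported by VerifyKubernetesImages below
    command: ["sleep", "3600"]                       # assumption: any long-lived command works
EOF
kubectl --context no-preload-978000 wait --for=condition=Ready pod/busybox --timeout=8m0s
kubectl --context no-preload-978000 exec busybox -- /bin/sh -c "ulimit -n"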

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-509000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8c56b241-0d80-4baf-9ee9-fa82eab52b57] Pending
helpers_test.go:344: "busybox" [8c56b241-0d80-4baf-9ee9-fa82eab52b57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8c56b241-0d80-4baf-9ee9-fa82eab52b57] Running
E1219 11:53:37.776006   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:53:37.941538   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:37.946909   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:37.957020   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:37.977198   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:38.017755   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:38.098016   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:38.258167   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:38.578311   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:53:39.218475   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003254995s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-509000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-978000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1219 11:53:40.498681   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-978000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)
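The --images/--registries overrides above point the metrics-server addon at fake.domain/registry.k8s.io/echoserver:1.4, an image that is never meant to be pulled; the follow-up kubectl describe only verifies that the override reached the Deployment spec. A hedged sketch of that verification (the jsonpath assumes a single container; adjust for your minikube version):

# Sketch: enable the addon with an image override, then read back the image
# actually recorded in the Deployment.
out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-978000 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
kubectl --context no-preload-978000 -n kube-system get deploy metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# Expected (assumption): fake.domain/registry.k8s.io/echoserver:1.4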

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.28s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-978000 --alsologtostderr -v=3
E1219 11:53:43.060260   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-978000 --alsologtostderr -v=3: (8.284082114s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-509000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-509000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (8.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-509000 --alsologtostderr -v=3
E1219 11:53:48.180617   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-509000 --alsologtostderr -v=3: (8.229956076s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-978000 -n no-preload-978000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-978000 -n no-preload-978000: exit status 7 (71.183625ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-978000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.35s)
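Note the exit-code convention the harness leans on here: minikube status exits non-zero for a stopped host (7 in this run), and the test explicitly tolerates it ("may be ok") before enabling the dashboard addon against the stopped profile. A sketch of the same tolerance in shell (treating only 7 as the stopped case is an assumption drawn from this log):

# Sketch: query host state without letting a stopped cluster abort the script.
if out/minikube-darwin-amd64 status --format='{{.Host}}' -p no-preload-978000; then
  echo "host running"
else
  rc=$?   # exit status of the status command above
  [ "$rc" -eq 7 ] && echo "host stopped (ok)" || exit "$rc"
fi
out/minikube-darwin-amd64 addons enable dashboard -p no-preload-978000 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4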

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (296.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-978000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-978000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (4m56.253961842s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-978000 -n no-preload-978000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (296.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-509000 -n old-k8s-version-509000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-509000 -n old-k8s-version-509000: exit status 7 (69.905642ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-509000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (498.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-509000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1219 11:53:58.421413   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:54:01.535497   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:54:18.901869   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:54:35.658073   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:54:59.862593   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:55:12.159023   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.165262   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.175775   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.195874   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.237357   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.317778   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.479529   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:12.799892   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:13.441120   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:14.722604   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:17.284252   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:20.091653   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.097356   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.107628   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.128761   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.169151   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.250791   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.411122   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:20.731719   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:21.372425   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:22.404797   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:22.652806   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:23.456701   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:55:25.213060   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:30.334153   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:32.645107   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:55:34.723269   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 11:55:40.112399   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
E1219 11:55:40.574553   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:55:48.892767   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:55:53.126485   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:56:01.056759   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:56:07.797492   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
E1219 11:56:21.783653   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:56:34.087083   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:56:42.017491   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:56:45.780506   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:45.786350   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:45.797137   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:45.818718   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:45.858852   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:45.938970   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:46.099866   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:46.420281   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:47.016541   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 11:56:47.060453   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:48.340605   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:50.901901   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:56:51.810168   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:56:56.023591   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:57:06.263820   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:57:10.760725   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
E1219 11:57:12.037927   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 11:57:19.500134   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 11:57:26.744176   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:57:39.612397   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:57:56.040410   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 11:58:03.938541   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 11:58:07.299983   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
E1219 11:58:07.704738   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 11:58:37.945997   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-509000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (8m18.307444493s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-509000 -n old-k8s-version-509000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (498.49s)
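This invocation carries --kvm-network=default and --kvm-qemu-uri=qemu:///system even though the driver is hyperkit; those flags belong to the kvm2 driver and are presumably inert here, so the effective operation is the same stop-then-restart cycle as the other profiles. A stripped-down equivalent (sketch):

# Sketch: restart the stopped v1.16.0 profile and confirm the host came back.
out/minikube-darwin-amd64 start -p old-k8s-version-509000 --memory=2200 \
  --wait=true --driver=hyperkit --kubernetes-version=v1.16.0
out/minikube-darwin-amd64 status --format='{{.Host}}' -p old-k8s-version-509000
# Expected: Running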

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qrbmp" [503e736c-6581-416f-ba10-76abd0c1bee3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002617142s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
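UserAppExistsAfterStop asserts that the dashboard pod deployed before the stop is Running again after the restart. Outside the Go harness, a roughly equivalent wait on the same label and namespace would be:

# Sketch: block until the kubernetes-dashboard pod reports Ready
# (9m0s mirrors the harness timeout above).
kubectl --context no-preload-978000 -n kubernetes-dashboard wait \
  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s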

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qrbmp" [503e736c-6581-416f-ba10-76abd0c1bee3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002801495s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-978000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-978000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.17s)
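VerifyKubernetesImages dumps the images loaded in the VM as JSON and reports anything outside the core Kubernetes set; the busybox image from DeployApp is the one flagged here. A sketch of the same inspection, assuming jq is installed and that the JSON entries expose a repoTags array (an assumption about the output shape; inspect the raw JSON first):

# Sketch: list image tags in the profile and surface non-registry.k8s.io entries.
out/minikube-darwin-amd64 -p no-preload-978000 image list --format=json \
  | jq -r '.[].repoTags[]' \
  | grep -v '^registry.k8s.io/' || true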

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-978000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-978000 -n no-preload-978000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-978000 -n no-preload-978000: exit status 2 (169.314737ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-978000 -n no-preload-978000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-978000 -n no-preload-978000: exit status 2 (160.790949ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-978000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-978000 -n no-preload-978000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-978000 -n no-preload-978000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.00s)
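The Pause block encodes the expected degraded state: while paused, status reports APIServer=Paused and Kubelet=Stopped and exits 2, which the harness again treats as acceptable. A sketch of the full cycle:

# Sketch: pause, observe the exit-2 status (expected while paused), unpause.
out/minikube-darwin-amd64 pause -p no-preload-978000
out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-978000 \
  || echo "status exited $? (2 is expected while paused)"
out/minikube-darwin-amd64 unpause -p no-preload-978000
out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p no-preload-978000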

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (51.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-947000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1219 11:59:05.625738   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 11:59:29.627010   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-947000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (51.13499113s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.14s)
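The embed-certs group starts with --embed-certs, which stores the client certificate and key inline in the kubeconfig (client-certificate-data/client-key-data) instead of referencing files under .minikube/profiles. A quick way to confirm the effect (sketch; assumes the kubeconfig user entry is named after the profile):

# Sketch: inline certs show up as *-data fields rather than file paths.
kubectl config view --raw \
  -o jsonpath='{.users[?(@.name=="embed-certs-947000")].user.client-certificate-data}' \
  | head -c 40; echo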

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-947000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e4e6944-c47d-4867-b4c9-b55f9bb166d7] Pending
helpers_test.go:344: "busybox" [0e4e6944-c47d-4867-b4c9-b55f9bb166d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e4e6944-c47d-4867-b4c9-b55f9bb166d7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.006881269s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-947000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-947000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-947000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-947000 --alsologtostderr -v=3
E1219 12:00:12.164009   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-947000 --alsologtostderr -v=3: (8.246321915s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-947000 -n embed-certs-947000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-947000 -n embed-certs-947000: exit status 7 (70.579999ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-947000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (309.65s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-947000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4
E1219 12:00:20.095464   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 12:00:34.727419   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 12:00:39.882619   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 12:00:40.114842   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
E1219 12:00:47.781532   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
E1219 12:00:48.895730   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
E1219 12:01:30.074464   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 12:01:45.784421   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 12:01:47.019027   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
E1219 12:01:51.813774   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 12:02:10.765156   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-947000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.4: (5m9.482925316s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-947000 -n embed-certs-947000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (309.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8qbbs" [ca8195b2-1d4c-4697-9e78-91a44d9d517a] Running
E1219 12:02:13.469093   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003266175s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-8qbbs" [ca8195b2-1d4c-4697-9e78-91a44d9d517a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00369556s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-509000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-509000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-509000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-509000 -n old-k8s-version-509000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-509000 -n old-k8s-version-509000: exit status 2 (175.94056ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-509000 -n old-k8s-version-509000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-509000 -n old-k8s-version-509000: exit status 2 (167.930496ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-509000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-509000 -n old-k8s-version-509000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-509000 -n old-k8s-version-509000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.89s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-438000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1219 12:02:39.616726   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-438000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (53.299523768s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.30s)
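default-k8s-diff-port starts the API server on 8444 rather than the default 8443, so the kubeconfig entry for the profile should record that port. A quick check (sketch; assumes the cluster entry is named after the profile):

# Sketch: confirm the recorded API server URL uses port 8444.
kubectl config view \
  -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-438000")].cluster.server}'; echo
# Expected shape (assumption): https://<vm-ip>:8444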

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-438000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ba56421-8ccd-4a3d-b545-9e1632ccb814] Pending
helpers_test.go:344: "busybox" [2ba56421-8ccd-4a3d-b545-9e1632ccb814] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1219 12:03:28.989465   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:28.994924   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:29.005569   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:29.026203   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:29.066328   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:29.207900   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:29.369713   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:29.690318   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:30.331450   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2ba56421-8ccd-4a3d-b545-9e1632ccb814] Running
E1219 12:03:31.404147   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:31.410034   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:31.420663   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:31.441380   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:31.481826   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:31.562474   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:31.611854   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:31.722792   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:32.042939   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:32.683116   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:33.963984   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:03:34.173058   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004881618s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-438000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-438000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1219 12:03:36.525610   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-438000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-438000 --alsologtostderr -v=3
E1219 12:03:37.949060   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/custom-flannel-377000/client.crt: no such file or directory
E1219 12:03:39.293556   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:41.646213   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-438000 --alsologtostderr -v=3: (8.259404311s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000: exit status 7 (71.158615ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-438000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-438000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4
E1219 12:03:49.534501   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:03:51.886680   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:04:10.015149   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:04:12.367514   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:04:50.977009   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:04:53.328388   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:05:12.166453   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/flannel-377000/client.crt: no such file or directory
E1219 12:05:20.096959   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/bridge-377000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-438000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.4: (4m58.970767172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (299.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (23s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pck26" [57c60b1c-43c3-430a-80f0-dde000fd159c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1219 12:05:34.731533   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/ingress-addon-legacy-943000/client.crt: no such file or directory
E1219 12:05:40.119815   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pck26" [57c60b1c-43c3-430a-80f0-dde000fd159c] Running
E1219 12:05:48.900140   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/skaffold-596000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 23.002430697s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (23.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pck26" [57c60b1c-43c3-430a-80f0-dde000fd159c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002942443s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-947000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-947000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-947000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-947000 -n embed-certs-947000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-947000 -n embed-certs-947000: exit status 2 (157.26311ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-947000 -n embed-certs-947000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-947000 -n embed-certs-947000: exit status 2 (158.463244ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-947000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-947000 -n embed-certs-947000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-947000 -n embed-certs-947000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.97s)
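(A hedged manual equivalent of the pause/unpause verification above — after pause, {{.APIServer}} should report Paused and {{.Kubelet}} Stopped, each with status exiting 2, which the test treats as acceptable; assumes the same CI binary path.)

	out/minikube-darwin-amd64 pause -p embed-certs-947000
	out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p embed-certs-947000    # expect: Paused (exit 2)
	out/minikube-darwin-amd64 status --format='{{.Kubelet}}' -p embed-certs-947000      # expect: Stopped (exit 2)
	out/minikube-darwin-amd64 unpause -p embed-certs-947000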

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.01s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-798000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1219 12:06:12.896973   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
E1219 12:06:15.249235   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/old-k8s-version-509000/client.crt: no such file or directory
E1219 12:06:45.782498   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kubenet-377000/client.crt: no such file or directory
E1219 12:06:47.018243   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/addons-233000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-798000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (47.009622748s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-798000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-798000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.245276982s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/newest-cni/serial/Stop (8.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-798000 --alsologtostderr -v=3
E1219 12:06:51.812108   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/kindnet-377000/client.crt: no such file or directory
E1219 12:06:53.816658   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-798000 --alsologtostderr -v=3: (8.230450268s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-798000 -n newest-cni-798000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-798000 -n newest-cni-798000: exit status 7 (68.291592ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-798000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (38.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-798000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2
E1219 12:07:03.161593   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/auto-377000/client.crt: no such file or directory
E1219 12:07:10.762774   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/functional-795000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-798000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.29.0-rc.2: (38.709897002s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-798000 -n newest-cni-798000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.88s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-798000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/newest-cni/serial/Pause (1.87s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-798000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-798000 -n newest-cni-798000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-798000 -n newest-cni-798000: exit status 2 (158.902751ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-798000 -n newest-cni-798000
E1219 12:07:39.614851   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/calico-377000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-798000 -n newest-cni-798000: exit status 2 (159.767044ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-798000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-798000 -n newest-cni-798000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-798000 -n newest-cni-798000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.87s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t2pjr" [3c03de96-e289-4a7f-9255-e922c00ff6de] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00242201s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
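(The "healthy within" checks above poll for pods carrying the k8s-app=kubernetes-dashboard label to become Ready. A rough kubectl sketch of the same wait — assuming the test's context still exists in your kubeconfig; the test's own poller differs slightly in cadence and timeout handling.)

	kubectl --context default-k8s-diff-port-438000 -n kubernetes-dashboard \
		wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m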

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-t2pjr" [3c03de96-e289-4a7f-9255-e922c00ff6de] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003544306s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-438000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-438000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-438000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000
E1219 12:08:56.734828   20867 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17837-20429/.minikube/profiles/no-preload-978000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000: exit status 2 (168.041346ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000: exit status 2 (165.994462ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-438000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-438000 -n default-k8s-diff-port-438000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.93s)

Test skip (21/314)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (6.76s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-377000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-377000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-377000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/hosts:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/resolv.conf:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-377000

>>> host: crictl pods:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: crictl containers:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> k8s: describe netcat deployment:
error: context "cilium-377000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-377000" does not exist

>>> k8s: netcat logs:
error: context "cilium-377000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-377000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-377000" does not exist

>>> k8s: coredns logs:
error: context "cilium-377000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-377000" does not exist

>>> k8s: api server logs:
error: context "cilium-377000" does not exist

>>> host: /etc/cni:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: ip a s:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: ip r s:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: iptables-save:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: iptables table nat:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-377000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-377000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-377000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-377000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-377000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-377000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-377000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-377000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-377000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-377000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-377000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: kubelet daemon config:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> k8s: kubelet logs:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-377000

>>> host: docker daemon status:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: docker daemon config:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: docker system info:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: cri-docker daemon status:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: cri-docker daemon config:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: cri-dockerd version:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: containerd daemon status:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: containerd daemon config:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: containerd config dump:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: crio daemon status:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: crio daemon config:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: /etc/crio:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

>>> host: crio config:
* Profile "cilium-377000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-377000"

----------------------- debugLogs end: cilium-377000 [took: 6.299077002s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-377000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-377000
--- SKIP: TestNetworkPlugins/group/cilium (6.76s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-418000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-418000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)