Test Report: Hyperkit_macOS 17297

d70abdd8c088cadcf8720531a75f8262065eb1b0:2023-09-25:31157

Failed tests (3/318)

Order  Failed test                                                         Duration (s)
41     TestForceSystemdEnv                                                 20.68
194    TestMinikubeProfile                                                 66.42
360    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages   2.69
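
To reproduce one of these failures locally, the individual test can be selected by name. A minimal sketch, assuming a minikube source checkout with the integration suite under test/integration, a prebuilt out/minikube-darwin-amd64 binary, and the -minikube-start-args flag defined by that package (flag names may differ between releases):

	go test -v -timeout 30m ./test/integration -run TestForceSystemdEnv \
	  -args -minikube-start-args="--driver=hyperkit"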
TestForceSystemdEnv (20.68s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-992000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-992000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 90 (15.135769475s)

-- stdout --
	* [force-systemd-env-992000] minikube v1.31.2 on Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting control plane node force-systemd-env-992000 in cluster force-systemd-env-992000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0925 04:02:44.418461    5073 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:02:44.419003    5073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:02:44.419014    5073 out.go:309] Setting ErrFile to fd 2...
	I0925 04:02:44.419021    5073 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:02:44.419622    5073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 04:02:44.421227    5073 out.go:303] Setting JSON to false
	I0925 04:02:44.441626    5073 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1938,"bootTime":1695637826,"procs":433,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 04:02:44.441743    5073 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:02:44.481715    5073 out.go:177] * [force-systemd-env-992000] minikube v1.31.2 on Darwin 13.6
	I0925 04:02:44.539469    5073 notify.go:220] Checking for updates...
	I0925 04:02:44.561255    5073 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:02:44.603211    5073 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 04:02:44.647089    5073 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 04:02:44.695040    5073 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:02:44.737190    5073 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 04:02:44.779323    5073 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0925 04:02:44.801305    5073 config.go:182] Loaded profile config "offline-docker-993000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:02:44.801474    5073 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:02:44.831186    5073 out.go:177] * Using the hyperkit driver based on user configuration
	I0925 04:02:44.888125    5073 start.go:298] selected driver: hyperkit
	I0925 04:02:44.888153    5073 start.go:902] validating driver "hyperkit" against <nil>
	I0925 04:02:44.888218    5073 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:02:44.892391    5073 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:02:44.892519    5073 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17297-1019/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0925 04:02:44.899425    5073 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0925 04:02:44.903006    5073 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:02:44.903026    5073 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0925 04:02:44.903051    5073 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 04:02:44.903264    5073 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 04:02:44.903289    5073 cni.go:84] Creating CNI manager for ""
	I0925 04:02:44.903306    5073 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:02:44.903317    5073 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 04:02:44.903323    5073 start_flags.go:321] config:
	{Name:force-systemd-env-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:02:44.903469    5073 iso.go:125] acquiring lock: {Name:mk5685b8103aa0f952a2e44c47bdd1882fdd0bc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:02:44.980925    5073 out.go:177] * Starting control plane node force-systemd-env-992000 in cluster force-systemd-env-992000
	I0925 04:02:45.002391    5073 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:02:45.002481    5073 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 04:02:45.002515    5073 cache.go:57] Caching tarball of preloaded images
	I0925 04:02:45.002741    5073 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 04:02:45.002764    5073 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:02:45.002928    5073 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/force-systemd-env-992000/config.json ...
	I0925 04:02:45.002980    5073 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/force-systemd-env-992000/config.json: {Name:mkca4eaef89c2aaded0c143c275cfdd38c807152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:02:45.003610    5073 start.go:365] acquiring machines lock for force-systemd-env-992000: {Name:mkc5a9c335a363bfa8f942e55cb9e7e0d08ada9f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:02:45.003728    5073 start.go:369] acquired machines lock for "force-systemd-env-992000" in 87.656µs
	I0925 04:02:45.003778    5073 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 K
ubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:force-systemd-env-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:02:45.003876    5073 start.go:125] createHost starting for "" (driver="hyperkit")
	I0925 04:02:45.063097    5073 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0925 04:02:45.063544    5073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:02:45.063619    5073 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:02:45.072242    5073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52233
	I0925 04:02:45.072600    5073 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:02:45.073027    5073 main.go:141] libmachine: Using API Version  1
	I0925 04:02:45.073040    5073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:02:45.073288    5073 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:02:45.073395    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetMachineName
	I0925 04:02:45.073480    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:45.073589    5073 start.go:159] libmachine.API.Create for "force-systemd-env-992000" (driver="hyperkit")
	I0925 04:02:45.073618    5073 client.go:168] LocalClient.Create starting
	I0925 04:02:45.073658    5073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem
	I0925 04:02:45.073708    5073 main.go:141] libmachine: Decoding PEM data...
	I0925 04:02:45.073747    5073 main.go:141] libmachine: Parsing certificate...
	I0925 04:02:45.073804    5073 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem
	I0925 04:02:45.073835    5073 main.go:141] libmachine: Decoding PEM data...
	I0925 04:02:45.073844    5073 main.go:141] libmachine: Parsing certificate...
	I0925 04:02:45.073860    5073 main.go:141] libmachine: Running pre-create checks...
	I0925 04:02:45.073866    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .PreCreateCheck
	I0925 04:02:45.073945    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:45.074144    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetConfigRaw
	I0925 04:02:45.074536    5073 main.go:141] libmachine: Creating machine...
	I0925 04:02:45.074545    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .Create
	I0925 04:02:45.074615    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:45.074736    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | I0925 04:02:45.074603    5081 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 04:02:45.074808    5073 main.go:141] libmachine: (force-systemd-env-992000) Downloading /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1019/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0925 04:02:45.230088    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | I0925 04:02:45.230026    5081 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/id_rsa...
	I0925 04:02:45.434137    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | I0925 04:02:45.434037    5081 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/force-systemd-env-992000.rawdisk...
	I0925 04:02:45.434152    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Writing magic tar header
	I0925 04:02:45.434162    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Writing SSH key tar header
	I0925 04:02:45.434636    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | I0925 04:02:45.434602    5081 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000 ...
	I0925 04:02:45.762583    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:45.762668    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/hyperkit.pid
	I0925 04:02:45.762694    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Using UUID 0e241a66-5b93-11ee-bc5f-149d997fca88
	I0925 04:02:45.791789    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Generated MAC ee:42:fa:c9:ff:a3
	I0925 04:02:45.791817    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-992000
	I0925 04:02:45.791850    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0e241a66-5b93-11ee-bc5f-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000963c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0925 04:02:45.791879    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"0e241a66-5b93-11ee-bc5f-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000963c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]str
ing(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0925 04:02:45.791943    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "0e241a66-5b93-11ee-bc5f-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/force-systemd-env-992000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/tty,log=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-sys
temd-env-992000/bzimage,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-992000"}
	I0925 04:02:45.791975    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 0e241a66-5b93-11ee-bc5f-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/force-systemd-env-992000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/tty,log=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/console-ring -f kexec,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/bzimage,/Users/jenkins/minikube-integration/17
297-1019/.minikube/machines/force-systemd-env-992000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-992000"
	I0925 04:02:45.791990    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0925 04:02:45.794661    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 DEBUG: hyperkit: Pid is 5082
	I0925 04:02:45.795038    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Attempt 0
	I0925 04:02:45.795065    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:45.795124    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:45.796029    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Searching for ee:42:fa:c9:ff:a3 in /var/db/dhcpd_leases ...
	I0925 04:02:45.796121    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0925 04:02:45.796154    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:3e:6d:bb:ef:93:69 ID:1,3e:6d:bb:ef:93:69 Lease:0x6512ba49}
	I0925 04:02:45.796185    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:fe:b2:aa:f3:66:b3 ID:1,fe:b2:aa:f3:66:b3 Lease:0x6512b9dd}
	I0925 04:02:45.796211    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:a6:6a:27:28:f5:78 ID:1,a6:6a:27:28:f5:78 Lease:0x6512b973}
	I0925 04:02:45.796260    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:c6:6c:a1:83:fe:48 ID:1,c6:6c:a1:83:fe:48 Lease:0x6512b923}
	I0925 04:02:45.796276    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:96:fc:b9:cf:b2:a5 ID:1,96:fc:b9:cf:b2:a5 Lease:0x6511673b}
	I0925 04:02:45.796286    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:22:64:90:f4:c3:6a ID:1,22:64:90:f4:c3:6a Lease:0x651166b2}
	I0925 04:02:45.796293    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:32:8:c1:c2:f0:84 ID:1,32:8:c1:c2:f0:84 Lease:0x6512b881}
	I0925 04:02:45.796302    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:d2:8c:57:41:3a:9b ID:1,d2:8c:57:41:3a:9b Lease:0x6512b84d}
	I0925 04:02:45.796317    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:4a:e6:41:59:5a:b3 ID:1,4a:e6:41:59:5a:b3 Lease:0x65116582}
	I0925 04:02:45.796329    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:16:5c:94:4a:13:17 ID:1,16:5c:94:4a:13:17 Lease:0x6511656d}
	I0925 04:02:45.796337    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:2e:f:4b:9f:7c:82 ID:1,2e:f:4b:9f:7c:82 Lease:0x6512b6b6}
	I0925 04:02:45.796357    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 04:02:45.796371    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 04:02:45.796380    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 04:02:45.796388    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 04:02:45.796409    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 04:02:45.796429    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 04:02:45.796467    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 04:02:45.801224    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0925 04:02:45.808720    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0925 04:02:45.809580    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0925 04:02:45.809619    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0925 04:02:45.809638    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0925 04:02:45.809653    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:45 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0925 04:02:46.164905    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0925 04:02:46.164926    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0925 04:02:46.269057    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0925 04:02:46.269086    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0925 04:02:46.269115    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0925 04:02:46.269136    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0925 04:02:46.269937    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0925 04:02:46.269947    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:46 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0925 04:02:47.797898    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Attempt 1
	I0925 04:02:47.797916    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:47.797961    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:47.798839    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Searching for ee:42:fa:c9:ff:a3 in /var/db/dhcpd_leases ...
	I0925 04:02:47.798911    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0925 04:02:47.798924    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:3e:6d:bb:ef:93:69 ID:1,3e:6d:bb:ef:93:69 Lease:0x6512ba49}
	I0925 04:02:47.798934    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:fe:b2:aa:f3:66:b3 ID:1,fe:b2:aa:f3:66:b3 Lease:0x6512b9dd}
	I0925 04:02:47.798943    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:a6:6a:27:28:f5:78 ID:1,a6:6a:27:28:f5:78 Lease:0x6512b973}
	I0925 04:02:47.798956    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:c6:6c:a1:83:fe:48 ID:1,c6:6c:a1:83:fe:48 Lease:0x6512b923}
	I0925 04:02:47.798968    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:96:fc:b9:cf:b2:a5 ID:1,96:fc:b9:cf:b2:a5 Lease:0x6511673b}
	I0925 04:02:47.798978    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:22:64:90:f4:c3:6a ID:1,22:64:90:f4:c3:6a Lease:0x651166b2}
	I0925 04:02:47.798986    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:32:8:c1:c2:f0:84 ID:1,32:8:c1:c2:f0:84 Lease:0x6512b881}
	I0925 04:02:47.798997    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:d2:8c:57:41:3a:9b ID:1,d2:8c:57:41:3a:9b Lease:0x6512b84d}
	I0925 04:02:47.799021    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:4a:e6:41:59:5a:b3 ID:1,4a:e6:41:59:5a:b3 Lease:0x65116582}
	I0925 04:02:47.799034    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:16:5c:94:4a:13:17 ID:1,16:5c:94:4a:13:17 Lease:0x6511656d}
	I0925 04:02:47.799042    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:2e:f:4b:9f:7c:82 ID:1,2e:f:4b:9f:7c:82 Lease:0x6512b6b6}
	I0925 04:02:47.799053    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 04:02:47.799065    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 04:02:47.799076    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 04:02:47.799090    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 04:02:47.799098    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 04:02:47.799107    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 04:02:47.799117    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 04:02:49.799640    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Attempt 2
	I0925 04:02:49.799658    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:49.799771    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:49.800586    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Searching for ee:42:fa:c9:ff:a3 in /var/db/dhcpd_leases ...
	I0925 04:02:49.800649    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0925 04:02:49.800658    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:3e:6d:bb:ef:93:69 ID:1,3e:6d:bb:ef:93:69 Lease:0x6512ba49}
	I0925 04:02:49.800667    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:fe:b2:aa:f3:66:b3 ID:1,fe:b2:aa:f3:66:b3 Lease:0x6512b9dd}
	I0925 04:02:49.800674    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:a6:6a:27:28:f5:78 ID:1,a6:6a:27:28:f5:78 Lease:0x6512b973}
	I0925 04:02:49.800696    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:c6:6c:a1:83:fe:48 ID:1,c6:6c:a1:83:fe:48 Lease:0x6512b923}
	I0925 04:02:49.800714    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:96:fc:b9:cf:b2:a5 ID:1,96:fc:b9:cf:b2:a5 Lease:0x6511673b}
	I0925 04:02:49.800725    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:22:64:90:f4:c3:6a ID:1,22:64:90:f4:c3:6a Lease:0x651166b2}
	I0925 04:02:49.800734    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:32:8:c1:c2:f0:84 ID:1,32:8:c1:c2:f0:84 Lease:0x6512b881}
	I0925 04:02:49.800745    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:d2:8c:57:41:3a:9b ID:1,d2:8c:57:41:3a:9b Lease:0x6512b84d}
	I0925 04:02:49.800754    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:4a:e6:41:59:5a:b3 ID:1,4a:e6:41:59:5a:b3 Lease:0x65116582}
	I0925 04:02:49.800761    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:16:5c:94:4a:13:17 ID:1,16:5c:94:4a:13:17 Lease:0x6511656d}
	I0925 04:02:49.800776    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:2e:f:4b:9f:7c:82 ID:1,2e:f:4b:9f:7c:82 Lease:0x6512b6b6}
	I0925 04:02:49.800789    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 04:02:49.800800    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 04:02:49.800815    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 04:02:49.800828    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 04:02:49.800837    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 04:02:49.800846    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 04:02:49.800856    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 04:02:51.238980    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:51 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0925 04:02:51.239057    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:51 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0925 04:02:51.239069    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | 2023/09/25 04:02:51 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0925 04:02:51.802159    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Attempt 3
	I0925 04:02:51.802179    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:51.802236    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:51.803211    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Searching for ee:42:fa:c9:ff:a3 in /var/db/dhcpd_leases ...
	I0925 04:02:51.803286    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0925 04:02:51.803309    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:3e:6d:bb:ef:93:69 ID:1,3e:6d:bb:ef:93:69 Lease:0x6512ba49}
	I0925 04:02:51.803323    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:fe:b2:aa:f3:66:b3 ID:1,fe:b2:aa:f3:66:b3 Lease:0x6512b9dd}
	I0925 04:02:51.803362    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:a6:6a:27:28:f5:78 ID:1,a6:6a:27:28:f5:78 Lease:0x6512b973}
	I0925 04:02:51.803408    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:c6:6c:a1:83:fe:48 ID:1,c6:6c:a1:83:fe:48 Lease:0x6512b923}
	I0925 04:02:51.803424    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:96:fc:b9:cf:b2:a5 ID:1,96:fc:b9:cf:b2:a5 Lease:0x6511673b}
	I0925 04:02:51.803434    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:22:64:90:f4:c3:6a ID:1,22:64:90:f4:c3:6a Lease:0x651166b2}
	I0925 04:02:51.803448    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:32:8:c1:c2:f0:84 ID:1,32:8:c1:c2:f0:84 Lease:0x6512b881}
	I0925 04:02:51.803474    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:d2:8c:57:41:3a:9b ID:1,d2:8c:57:41:3a:9b Lease:0x6512b84d}
	I0925 04:02:51.803492    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:4a:e6:41:59:5a:b3 ID:1,4a:e6:41:59:5a:b3 Lease:0x65116582}
	I0925 04:02:51.803506    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:16:5c:94:4a:13:17 ID:1,16:5c:94:4a:13:17 Lease:0x6511656d}
	I0925 04:02:51.803520    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:2e:f:4b:9f:7c:82 ID:1,2e:f:4b:9f:7c:82 Lease:0x6512b6b6}
	I0925 04:02:51.803550    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 04:02:51.803566    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 04:02:51.803579    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 04:02:51.803595    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 04:02:51.803609    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 04:02:51.803629    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 04:02:51.803641    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 04:02:53.804673    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Attempt 4
	I0925 04:02:53.804696    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:53.804982    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:53.806038    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Searching for ee:42:fa:c9:ff:a3 in /var/db/dhcpd_leases ...
	I0925 04:02:53.806108    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0925 04:02:53.806167    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:3e:6d:bb:ef:93:69 ID:1,3e:6d:bb:ef:93:69 Lease:0x6512ba49}
	I0925 04:02:53.806188    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:fe:b2:aa:f3:66:b3 ID:1,fe:b2:aa:f3:66:b3 Lease:0x6512b9dd}
	I0925 04:02:53.806208    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:a6:6a:27:28:f5:78 ID:1,a6:6a:27:28:f5:78 Lease:0x6512b973}
	I0925 04:02:53.806227    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:c6:6c:a1:83:fe:48 ID:1,c6:6c:a1:83:fe:48 Lease:0x6512b923}
	I0925 04:02:53.806242    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:96:fc:b9:cf:b2:a5 ID:1,96:fc:b9:cf:b2:a5 Lease:0x6511673b}
	I0925 04:02:53.806256    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:22:64:90:f4:c3:6a ID:1,22:64:90:f4:c3:6a Lease:0x651166b2}
	I0925 04:02:53.806276    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:32:8:c1:c2:f0:84 ID:1,32:8:c1:c2:f0:84 Lease:0x6512b881}
	I0925 04:02:53.806292    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:d2:8c:57:41:3a:9b ID:1,d2:8c:57:41:3a:9b Lease:0x6512b84d}
	I0925 04:02:53.806306    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:4a:e6:41:59:5a:b3 ID:1,4a:e6:41:59:5a:b3 Lease:0x65116582}
	I0925 04:02:53.806319    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:16:5c:94:4a:13:17 ID:1,16:5c:94:4a:13:17 Lease:0x6511656d}
	I0925 04:02:53.806332    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:2e:f:4b:9f:7c:82 ID:1,2e:f:4b:9f:7c:82 Lease:0x6512b6b6}
	I0925 04:02:53.806355    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 04:02:53.806378    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 04:02:53.806395    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 04:02:53.806410    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 04:02:53.806423    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 04:02:53.806436    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 04:02:53.806452    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 04:02:55.806854    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Attempt 5
	I0925 04:02:55.806874    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:55.806973    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:55.807946    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Searching for ee:42:fa:c9:ff:a3 in /var/db/dhcpd_leases ...
	I0925 04:02:55.808019    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found 19 entries in /var/db/dhcpd_leases!
	I0925 04:02:55.808030    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ee:42:fa:c9:ff:a3 ID:1,ee:42:fa:c9:ff:a3 Lease:0x6512ba5e}
	I0925 04:02:55.808039    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | Found match: ee:42:fa:c9:ff:a3
	I0925 04:02:55.808049    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | IP: 192.168.64.20
	I0925 04:02:55.808114    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetConfigRaw
	I0925 04:02:55.808616    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:55.808705    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:55.808781    5073 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0925 04:02:55.808789    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetState
	I0925 04:02:55.808869    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:02:55.808921    5073 main.go:141] libmachine: (force-systemd-env-992000) DBG | hyperkit pid from json: 5082
	I0925 04:02:55.809680    5073 main.go:141] libmachine: Detecting operating system of created instance...
	I0925 04:02:55.809693    5073 main.go:141] libmachine: Waiting for SSH to be available...
	I0925 04:02:55.809698    5073 main.go:141] libmachine: Getting to WaitForSSH function...
	I0925 04:02:55.809706    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:55.809796    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:55.809884    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.809960    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.810035    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:55.810151    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:55.810464    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:55.810473    5073 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0925 04:02:55.865795    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:02:55.865807    5073 main.go:141] libmachine: Detecting the provisioner...
	I0925 04:02:55.865813    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:55.865950    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:55.866061    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.866160    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.866251    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:55.866398    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:55.866663    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:55.866672    5073 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0925 04:02:55.922121    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0925 04:02:55.922191    5073 main.go:141] libmachine: found compatible host: buildroot
	I0925 04:02:55.922198    5073 main.go:141] libmachine: Provisioning with buildroot...
	I0925 04:02:55.922205    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetMachineName
	I0925 04:02:55.922359    5073 buildroot.go:166] provisioning hostname "force-systemd-env-992000"
	I0925 04:02:55.922373    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetMachineName
	I0925 04:02:55.922465    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:55.922555    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:55.922647    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.922718    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.922803    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:55.922932    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:55.923183    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:55.923193    5073 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-992000 && echo "force-systemd-env-992000" | sudo tee /etc/hostname
	I0925 04:02:55.987414    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-992000
	
	I0925 04:02:55.987436    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:55.987595    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:55.987684    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.987755    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:55.987834    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:55.987986    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:55.988236    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:55.988249    5073 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-992000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-992000/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-992000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 04:02:56.050038    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:02:56.050057    5073 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1019/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1019/.minikube}
	I0925 04:02:56.050072    5073 buildroot.go:174] setting up certificates
	I0925 04:02:56.050083    5073 provision.go:83] configureAuth start
	I0925 04:02:56.050091    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetMachineName
	I0925 04:02:56.050214    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetIP
	I0925 04:02:56.050323    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.050418    5073 provision.go:138] copyHostCerts
	I0925 04:02:56.050453    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem
	I0925 04:02:56.050504    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem, removing ...
	I0925 04:02:56.050515    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem
	I0925 04:02:56.050646    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem (1078 bytes)
	I0925 04:02:56.050842    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem
	I0925 04:02:56.050875    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem, removing ...
	I0925 04:02:56.050880    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem
	I0925 04:02:56.050954    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem (1123 bytes)
	I0925 04:02:56.051074    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem
	I0925 04:02:56.051104    5073 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem, removing ...
	I0925 04:02:56.051109    5073 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem
	I0925 04:02:56.051180    5073 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem (1675 bytes)
	I0925 04:02:56.051300    5073 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-992000 san=[192.168.64.20 192.168.64.20 localhost 127.0.0.1 minikube force-systemd-env-992000]
	I0925 04:02:56.202499    5073 provision.go:172] copyRemoteCerts
	I0925 04:02:56.202561    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 04:02:56.202579    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.202756    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:56.202863    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.202968    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:56.203059    5073 sshutil.go:53] new ssh client: &{IP:192.168.64.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/id_rsa Username:docker}
	I0925 04:02:56.236610    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0925 04:02:56.236697    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 04:02:56.252750    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0925 04:02:56.252814    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0925 04:02:56.269078    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0925 04:02:56.269147    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 04:02:56.285980    5073 provision.go:86] duration metric: configureAuth took 235.882142ms
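configureAuth reuses the CA under .minikube/certs and issues a per-machine server certificate whose SANs, per the log above, cover 192.168.64.20, localhost, 127.0.0.1, minikube and force-systemd-env-992000, then copies ca.pem, server.pem and server-key.pem into /etc/docker so dockerd can serve TLS on 2376. A quick host-side sanity check of the generated certificate (a sketch using the path logged above):

	$ openssl x509 -noout -text \
	    -in /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'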
	I0925 04:02:56.285993    5073 buildroot.go:189] setting minikube options for container-runtime
	I0925 04:02:56.286133    5073 config.go:182] Loaded profile config "force-systemd-env-992000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:02:56.286151    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:56.286279    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.286366    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:56.286460    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.286555    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.286635    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:56.286755    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:56.286992    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:56.287000    5073 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 04:02:56.343048    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 04:02:56.343061    5073 buildroot.go:70] root file system type: tmpfs
	I0925 04:02:56.343148    5073 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 04:02:56.343164    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.343298    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:56.343399    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.343487    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.343577    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:56.343700    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:56.343977    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:56.344037    5073 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 04:02:56.409199    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 04:02:56.409224    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.409362    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:56.409463    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.409568    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.409671    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:56.409815    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:56.410081    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:56.410094    5073 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 04:02:56.925700    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
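The generated unit relies on the standard systemd pattern its own comments describe: the bare ExecStart= line first clears any command inherited from a base unit, and the second ExecStart= then supplies the full dockerd command line; without the clearing line systemd would refuse the unit with the "more than one ExecStart= setting" error quoted in the comment. A minimal, illustrative drop-in using the same pattern (hypothetical override path, not the file minikube writes):

	$ sudo mkdir -p /etc/systemd/system/docker.service.d
	$ printf '%s\n' '[Service]' 'ExecStart=' \
	    'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
	    | sudo tee /etc/systemd/system/docker.service.d/override.conf
	$ sudo systemctl daemon-reload && sudo systemctl restart docker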
	
	I0925 04:02:56.925727    5073 main.go:141] libmachine: Checking connection to Docker...
	I0925 04:02:56.925734    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetURL
	I0925 04:02:56.925880    5073 main.go:141] libmachine: Docker is up and running!
	I0925 04:02:56.925887    5073 main.go:141] libmachine: Reticulating splines...
	I0925 04:02:56.925906    5073 client.go:171] LocalClient.Create took 11.852230342s
	I0925 04:02:56.925918    5073 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-992000" took 11.852292569s
	I0925 04:02:56.925927    5073 start.go:300] post-start starting for "force-systemd-env-992000" (driver="hyperkit")
	I0925 04:02:56.925936    5073 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 04:02:56.925947    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:56.926089    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 04:02:56.926102    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.926193    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:56.926290    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.926377    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:56.926499    5073 sshutil.go:53] new ssh client: &{IP:192.168.64.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/id_rsa Username:docker}
	I0925 04:02:56.959828    5073 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 04:02:56.962626    5073 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 04:02:56.962642    5073 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1019/.minikube/addons for local assets ...
	I0925 04:02:56.962738    5073 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1019/.minikube/files for local assets ...
	I0925 04:02:56.962904    5073 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem -> 14872.pem in /etc/ssl/certs
	I0925 04:02:56.962911    5073 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem -> /etc/ssl/certs/14872.pem
	I0925 04:02:56.963105    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 04:02:56.969079    5073 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem --> /etc/ssl/certs/14872.pem (1708 bytes)
	I0925 04:02:56.986706    5073 start.go:303] post-start completed in 60.768026ms
	I0925 04:02:56.986746    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetConfigRaw
	I0925 04:02:56.987404    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetIP
	I0925 04:02:56.987580    5073 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/force-systemd-env-992000/config.json ...
	I0925 04:02:56.987897    5073 start.go:128] duration metric: createHost completed in 11.983969248s
	I0925 04:02:56.987914    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:56.988022    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:56.988109    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.988196    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:56.988279    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:56.988387    5073 main.go:141] libmachine: Using SSH client type: native
	I0925 04:02:56.988636    5073 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.20 22 <nil> <nil>}
	I0925 04:02:56.988646    5073 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0925 04:02:57.045470    5073 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695639776.921420990
	
	I0925 04:02:57.045482    5073 fix.go:206] guest clock: 1695639776.921420990
	I0925 04:02:57.045488    5073 fix.go:219] Guest: 2023-09-25 04:02:56.92142099 -0700 PDT Remote: 2023-09-25 04:02:56.987907 -0700 PDT m=+12.599334362 (delta=-66.48601ms)
	I0925 04:02:57.045504    5073 fix.go:190] guest clock delta is within tolerance: -66.48601ms
	I0925 04:02:57.045508    5073 start.go:83] releasing machines lock for "force-systemd-env-992000", held for 12.041729584s
	I0925 04:02:57.045527    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:57.045679    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetIP
	I0925 04:02:57.045780    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:57.046123    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:57.046246    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .DriverName
	I0925 04:02:57.046356    5073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 04:02:57.046392    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:57.046414    5073 ssh_runner.go:195] Run: cat /version.json
	I0925 04:02:57.046425    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHHostname
	I0925 04:02:57.046515    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:57.046535    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHPort
	I0925 04:02:57.046607    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:57.046630    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHKeyPath
	I0925 04:02:57.046727    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:57.046761    5073 main.go:141] libmachine: (force-systemd-env-992000) Calling .GetSSHUsername
	I0925 04:02:57.046829    5073 sshutil.go:53] new ssh client: &{IP:192.168.64.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/id_rsa Username:docker}
	I0925 04:02:57.046878    5073 sshutil.go:53] new ssh client: &{IP:192.168.64.20 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/force-systemd-env-992000/id_rsa Username:docker}
	I0925 04:02:57.119220    5073 ssh_runner.go:195] Run: systemctl --version
	I0925 04:02:57.123307    5073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 04:02:57.127129    5073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 04:02:57.127211    5073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 04:02:57.139173    5073 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
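Any pre-existing bridge or podman CNI configuration on the guest is parked by renaming it with a .mk_disabled suffix, so the CNI minikube configures later is the only one the runtime will load; here 87-podman-bridge.conflist was disabled. On the guest that shows up as (illustrative):

	$ ls /etc/cni/net.d
	87-podman-bridge.conflist.mk_disabled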
	I0925 04:02:57.139189    5073 start.go:469] detecting cgroup driver to use...
	I0925 04:02:57.139201    5073 start.go:473] using "systemd" cgroup driver as enforced via flags
	I0925 04:02:57.139312    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:02:57.153911    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 04:02:57.161206    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 04:02:57.168371    5073 containerd.go:145] configuring containerd to use "systemd" as cgroup driver...
	I0925 04:02:57.168417    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0925 04:02:57.175782    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:02:57.183517    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 04:02:57.191300    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:02:57.198881    5073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 04:02:57.206804    5073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 04:02:57.214540    5073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 04:02:57.221464    5073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 04:02:57.228624    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:02:57.319372    5073 ssh_runner.go:195] Run: sudo systemctl restart containerd
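The sed edits above switch containerd's runc runtime onto the systemd cgroup driver (SystemdCgroup = true), normalise the runtime type to io.containerd.runc.v2 and point conf_dir at /etc/cni/net.d before the daemon is restarted. Two quick guest-side checks (a sketch; the exact TOML section that holds SystemdCgroup depends on the containerd version):

	$ grep -n 'SystemdCgroup' /etc/containerd/config.toml
	$ sudo systemctl is-active containerd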
	I0925 04:02:57.332381    5073 start.go:469] detecting cgroup driver to use...
	I0925 04:02:57.332405    5073 start.go:473] using "systemd" cgroup driver as enforced via flags
	I0925 04:02:57.332503    5073 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 04:02:57.347258    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:02:57.362479    5073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 04:02:57.382463    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:02:57.391096    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:02:57.399651    5073 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 04:02:57.429019    5073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:02:57.438386    5073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:02:57.451003    5073 ssh_runner.go:195] Run: which cri-dockerd
	I0925 04:02:57.453681    5073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 04:02:57.459854    5073 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 04:02:57.471139    5073 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 04:02:57.558145    5073 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 04:02:57.648041    5073 docker.go:554] configuring docker to use "systemd" as cgroup driver...
	I0925 04:02:57.648139    5073 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0925 04:02:57.659202    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:02:57.745289    5073 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:02:58.978378    5073 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.23306488s)
	I0925 04:02:58.978447    5073 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:02:59.068913    5073 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 04:02:59.162017    5073 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:02:59.259928    5073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:02:59.357712    5073 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 04:02:59.406861    5073 out.go:177] 
	W0925 04:02:59.427898    5073 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0925 04:02:59.427926    5073 out.go:239] * 
	* 
	W0925 04:02:59.429175    5073 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 04:02:59.490918    5073 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-992000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-992000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-25 04:02:59.673409 -0700 PDT m=+1802.131389625
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-992000 -n force-systemd-env-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-992000 -n force-systemd-env-992000: exit status 6 (122.598911ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0925 04:02:59.786722    5091 status.go:415] kubeconfig endpoint: extract IP: "force-systemd-env-992000" does not appear in /Users/jenkins/minikube-integration/17297-1019/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-env-992000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-env-992000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-992000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-992000: (5.265553067s)
--- FAIL: TestForceSystemdEnv (20.68s)
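The proximate failure is sudo systemctl restart cri-docker.socket exiting non-zero inside the guest; the error text only defers to journalctl. When reproducing this by hand, and before the profile is deleted as it is here, the guest-side journal for the cri-docker units is the place to look, using the same ssh form the test itself uses (a sketch):

	$ out/minikube-darwin-amd64 -p force-systemd-env-992000 ssh "sudo systemctl status cri-docker.socket cri-docker.service"
	$ out/minikube-darwin-amd64 -p force-systemd-env-992000 ssh "sudo journalctl -xe -u cri-docker.socket -u cri-docker.service --no-pager"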

                                                
                                    
TestMinikubeProfile (66.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-782000 --driver=hyperkit 
E0925 03:46:40.479915    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-782000 --driver=hyperkit : (35.357074078s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-784000 --driver=hyperkit 
E0925 03:47:21.441947    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p second-784000 --driver=hyperkit : exit status 90 (18.064634732s)

                                                
                                                
-- stdout --
	* [second-784000] minikube v1.31.2 on Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node second-784000 in cluster second-784000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-amd64 start -p second-784000 --driver=hyperkit ": exit status 90
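This is the same RUNTIME_ENABLE failure as in TestForceSystemdEnv above: restarting cri-docker.socket in the freshly created second-784000 VM exits non-zero. Before the cleanup below deletes the profile, the same journal inspection applies (a sketch):

	$ out/minikube-darwin-amd64 -p second-784000 ssh "sudo journalctl -xe -u cri-docker.socket --no-pager"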
panic.go:523: *** TestMinikubeProfile FAILED at 2023-09-25 03:47:25.922528 -0700 PDT m=+868.341147985
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p second-784000 -n second-784000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p second-784000 -n second-784000: exit status 6 (126.305997ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0925 03:47:26.039666    3530 status.go:415] kubeconfig endpoint: extract IP: "second-784000" does not appear in /Users/jenkins/minikube-integration/17297-1019/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "second-784000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "second-784000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-784000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-784000: (5.267702553s)
panic.go:523: *** TestMinikubeProfile FAILED at 2023-09-25 03:47:31.316918 -0700 PDT m=+873.735448563
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p first-782000 -n first-782000
helpers_test.go:244: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p first-782000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p first-782000 logs -n 25: (1.911862466s)
helpers_test.go:252: TestMinikubeProfile logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| Command |                   Args                   |           Profile           |   User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	| delete  | -p functional-220000                     | functional-220000           | jenkins  | v1.31.2 | 25 Sep 23 03:42 PDT | 25 Sep 23 03:42 PDT |
	| start   | -p image-287000                          | image-287000                | jenkins  | v1.31.2 | 25 Sep 23 03:42 PDT | 25 Sep 23 03:43 PDT |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-287000                | jenkins  | v1.31.2 | 25 Sep 23 03:43 PDT | 25 Sep 23 03:43 PDT |
	|         | ./testdata/image-build/test-normal       |                             |          |         |                     |                     |
	|         | -p image-287000                          |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-287000                | jenkins  | v1.31.2 | 25 Sep 23 03:43 PDT | 25 Sep 23 03:43 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                             |          |         |                     |                     |
	|         | --build-opt=no-cache                     |                             |          |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                             |          |         |                     |                     |
	|         | image-287000                             |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-287000                | jenkins  | v1.31.2 | 25 Sep 23 03:43 PDT | 25 Sep 23 03:43 PDT |
	|         | ./testdata/image-build/test-normal       |                             |          |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                             |          |         |                     |                     |
	|         | image-287000                             |                             |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-287000                | jenkins  | v1.31.2 | 25 Sep 23 03:43 PDT | 25 Sep 23 03:43 PDT |
	|         | -f inner/Dockerfile                      |                             |          |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                             |          |         |                     |                     |
	|         | -p image-287000                          |                             |          |         |                     |                     |
	| delete  | -p image-287000                          | image-287000                | jenkins  | v1.31.2 | 25 Sep 23 03:43 PDT | 25 Sep 23 03:43 PDT |
	| start   | -p ingress-addon-legacy-797000           | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:43 PDT | 25 Sep 23 03:44 PDT |
	|         | --kubernetes-version=v1.18.20            |                             |          |         |                     |                     |
	|         | --memory=4096 --wait=true                |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-797000              | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:44 PDT | 25 Sep 23 03:44 PDT |
	|         | addons enable ingress                    |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-797000              | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:44 PDT | 25 Sep 23 03:44 PDT |
	|         | addons enable ingress-dns                |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=5                   |                             |          |         |                     |                     |
	| ssh     | ingress-addon-legacy-797000              | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:45 PDT | 25 Sep 23 03:45 PDT |
	|         | ssh curl -s http://127.0.0.1/            |                             |          |         |                     |                     |
	|         | -H 'Host: nginx.example.com'             |                             |          |         |                     |                     |
	| ip      | ingress-addon-legacy-797000 ip           | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:45 PDT | 25 Sep 23 03:45 PDT |
	| addons  | ingress-addon-legacy-797000              | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:45 PDT | 25 Sep 23 03:45 PDT |
	|         | addons disable ingress-dns               |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |          |         |                     |                     |
	| addons  | ingress-addon-legacy-797000              | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:45 PDT | 25 Sep 23 03:45 PDT |
	|         | addons disable ingress                   |                             |          |         |                     |                     |
	|         | --alsologtostderr -v=1                   |                             |          |         |                     |                     |
	| delete  | -p ingress-addon-legacy-797000           | ingress-addon-legacy-797000 | jenkins  | v1.31.2 | 25 Sep 23 03:45 PDT | 25 Sep 23 03:45 PDT |
	| start   | -p json-output-886000                    | json-output-886000          | testUser | v1.31.2 | 25 Sep 23 03:45 PDT | 25 Sep 23 03:46 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	|         | --memory=2200 --wait=true                |                             |          |         |                     |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| pause   | -p json-output-886000                    | json-output-886000          | testUser | v1.31.2 | 25 Sep 23 03:46 PDT | 25 Sep 23 03:46 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| unpause | -p json-output-886000                    | json-output-886000          | testUser | v1.31.2 | 25 Sep 23 03:46 PDT | 25 Sep 23 03:46 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| stop    | -p json-output-886000                    | json-output-886000          | testUser | v1.31.2 | 25 Sep 23 03:46 PDT | 25 Sep 23 03:46 PDT |
	|         | --output=json --user=testUser            |                             |          |         |                     |                     |
	| delete  | -p json-output-886000                    | json-output-886000          | jenkins  | v1.31.2 | 25 Sep 23 03:46 PDT | 25 Sep 23 03:46 PDT |
	| start   | -p json-output-error-698000              | json-output-error-698000    | jenkins  | v1.31.2 | 25 Sep 23 03:46 PDT |                     |
	|         | --memory=2200 --output=json              |                             |          |         |                     |                     |
	|         | --wait=true --driver=fail                |                             |          |         |                     |                     |
	| delete  | -p json-output-error-698000              | json-output-error-698000    | jenkins  | v1.31.2 | 25 Sep 23 03:46 PDT | 25 Sep 23 03:46 PDT |
	| start   | -p first-782000                          | first-782000                | jenkins  | v1.31.2 | 25 Sep 23 03:46 PDT | 25 Sep 23 03:47 PDT |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| start   | -p second-784000                         | second-784000               | jenkins  | v1.31.2 | 25 Sep 23 03:47 PDT |                     |
	|         | --driver=hyperkit                        |                             |          |         |                     |                     |
	| delete  | -p second-784000                         | second-784000               | jenkins  | v1.31.2 | 25 Sep 23 03:47 PDT | 25 Sep 23 03:47 PDT |
	|---------|------------------------------------------|-----------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:47:07
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:47:07.894533    3510 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:47:07.895108    3510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:47:07.895112    3510 out.go:309] Setting ErrFile to fd 2...
	I0925 03:47:07.895115    3510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:47:07.895466    3510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 03:47:07.897126    3510 out.go:303] Setting JSON to false
	I0925 03:47:07.917660    3510 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1001,"bootTime":1695637826,"procs":446,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 03:47:07.917767    3510 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:47:07.954886    3510 out.go:177] * [second-784000] minikube v1.31.2 on Darwin 13.6
	I0925 03:47:08.066971    3510 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:47:08.029276    3510 notify.go:220] Checking for updates...
	I0925 03:47:08.140772    3510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 03:47:08.161864    3510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 03:47:08.182894    3510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:47:08.203977    3510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 03:47:08.224847    3510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:47:08.246439    3510 config.go:182] Loaded profile config "first-782000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:47:08.246553    3510 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:47:08.273799    3510 out.go:177] * Using the hyperkit driver based on user configuration
	I0925 03:47:08.315981    3510 start.go:298] selected driver: hyperkit
	I0925 03:47:08.315999    3510 start.go:902] validating driver "hyperkit" against <nil>
	I0925 03:47:08.316016    3510 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:47:08.316239    3510 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:47:08.316395    3510 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17297-1019/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0925 03:47:08.324126    3510 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0925 03:47:08.327579    3510 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:47:08.327594    3510 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0925 03:47:08.327623    3510 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:47:08.329945    3510 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0925 03:47:08.330090    3510 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 03:47:08.330113    3510 cni.go:84] Creating CNI manager for ""
	I0925 03:47:08.330126    3510 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:47:08.330139    3510 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0925 03:47:08.330145    3510 start_flags.go:321] config:
	{Name:second-784000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:second-784000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:47:08.330304    3510 iso.go:125] acquiring lock: {Name:mk5685b8103aa0f952a2e44c47bdd1882fdd0bc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:47:08.372850    3510 out.go:177] * Starting control plane node second-784000 in cluster second-784000
	I0925 03:47:08.393921    3510 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:47:08.393981    3510 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 03:47:08.393997    3510 cache.go:57] Caching tarball of preloaded images
	I0925 03:47:08.394138    3510 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 03:47:08.394149    3510 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 03:47:08.394306    3510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/second-784000/config.json ...
	I0925 03:47:08.394338    3510 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/second-784000/config.json: {Name:mkc718cd656203e1cb0443a3a948ec90d2e64efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 03:47:08.394883    3510 start.go:365] acquiring machines lock for second-784000: {Name:mkc5a9c335a363bfa8f942e55cb9e7e0d08ada9f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 03:47:08.394957    3510 start.go:369] acquired machines lock for "second-784000" in 62.395µs
	I0925 03:47:08.394992    3510 start.go:93] Provisioning new machine with config: &{Name:second-784000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.2 ClusterName:second-784000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 03:47:08.395069    3510 start.go:125] createHost starting for "" (driver="hyperkit")
	I0925 03:47:08.437833    3510 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	I0925 03:47:08.438193    3510 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:47:08.438254    3510 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:47:08.446502    3510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50858
	I0925 03:47:08.446843    3510 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:47:08.447281    3510 main.go:141] libmachine: Using API Version  1
	I0925 03:47:08.447293    3510 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:47:08.447510    3510 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:47:08.447665    3510 main.go:141] libmachine: (second-784000) Calling .GetMachineName
	I0925 03:47:08.447775    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:08.447873    3510 start.go:159] libmachine.API.Create for "second-784000" (driver="hyperkit")
	I0925 03:47:08.447896    3510 client.go:168] LocalClient.Create starting
	I0925 03:47:08.447929    3510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem
	I0925 03:47:08.447965    3510 main.go:141] libmachine: Decoding PEM data...
	I0925 03:47:08.447979    3510 main.go:141] libmachine: Parsing certificate...
	I0925 03:47:08.448039    3510 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem
	I0925 03:47:08.448061    3510 main.go:141] libmachine: Decoding PEM data...
	I0925 03:47:08.448070    3510 main.go:141] libmachine: Parsing certificate...
	I0925 03:47:08.448082    3510 main.go:141] libmachine: Running pre-create checks...
	I0925 03:47:08.448090    3510 main.go:141] libmachine: (second-784000) Calling .PreCreateCheck
	I0925 03:47:08.448167    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:08.448372    3510 main.go:141] libmachine: (second-784000) Calling .GetConfigRaw
	I0925 03:47:08.448867    3510 main.go:141] libmachine: Creating machine...
	I0925 03:47:08.448872    3510 main.go:141] libmachine: (second-784000) Calling .Create
	I0925 03:47:08.448943    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:08.449070    3510 main.go:141] libmachine: (second-784000) DBG | I0925 03:47:08.448940    3518 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 03:47:08.449120    3510 main.go:141] libmachine: (second-784000) Downloading /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17297-1019/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I0925 03:47:08.643065    3510 main.go:141] libmachine: (second-784000) DBG | I0925 03:47:08.643004    3518 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/id_rsa...
	I0925 03:47:08.794380    3510 main.go:141] libmachine: (second-784000) DBG | I0925 03:47:08.794321    3518 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/second-784000.rawdisk...
	I0925 03:47:08.794392    3510 main.go:141] libmachine: (second-784000) DBG | Writing magic tar header
	I0925 03:47:08.794401    3510 main.go:141] libmachine: (second-784000) DBG | Writing SSH key tar header
	I0925 03:47:08.794896    3510 main.go:141] libmachine: (second-784000) DBG | I0925 03:47:08.794854    3518 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000 ...
	I0925 03:47:09.120195    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:09.120240    3510 main.go:141] libmachine: (second-784000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/hyperkit.pid
	I0925 03:47:09.120290    3510 main.go:141] libmachine: (second-784000) DBG | Using UUID dfde6d8e-5b90-11ee-b7ab-149d997fca88
	I0925 03:47:09.146994    3510 main.go:141] libmachine: (second-784000) DBG | Generated MAC 2e:f:4b:9f:7c:82
	I0925 03:47:09.147009    3510 main.go:141] libmachine: (second-784000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-784000
	I0925 03:47:09.147043    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"dfde6d8e-5b90-11ee-b7ab-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000963f0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0925 03:47:09.147074    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"dfde6d8e-5b90-11ee-b7ab-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000963f0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0925 03:47:09.147132    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "dfde6d8e-5b90-11ee-b7ab-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/second-784000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/tty,log=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/bzimage,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-784000"}
	I0925 03:47:09.147161    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U dfde6d8e-5b90-11ee-b7ab-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/second-784000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/tty,log=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/console-ring -f kexec,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/bzimage,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=second-784000"
	I0925 03:47:09.147175    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0925 03:47:09.149697    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 DEBUG: hyperkit: Pid is 3519
	I0925 03:47:09.150085    3510 main.go:141] libmachine: (second-784000) DBG | Attempt 0
	I0925 03:47:09.150092    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:09.150146    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:09.151100    3510 main.go:141] libmachine: (second-784000) DBG | Searching for 2e:f:4b:9f:7c:82 in /var/db/dhcpd_leases ...
	I0925 03:47:09.151158    3510 main.go:141] libmachine: (second-784000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0925 03:47:09.151176    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 03:47:09.151197    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 03:47:09.151208    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 03:47:09.151215    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 03:47:09.151224    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 03:47:09.151241    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 03:47:09.151250    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 03:47:09.156416    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0925 03:47:09.165212    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0925 03:47:09.165982    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0925 03:47:09.166005    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0925 03:47:09.166019    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0925 03:47:09.166030    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0925 03:47:09.734180    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0925 03:47:09.734189    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0925 03:47:09.839184    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0925 03:47:09.839195    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0925 03:47:09.839203    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0925 03:47:09.839213    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0925 03:47:09.840094    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0925 03:47:09.840101    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0925 03:47:11.153207    3510 main.go:141] libmachine: (second-784000) DBG | Attempt 1
	I0925 03:47:11.153215    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:11.153294    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:11.154141    3510 main.go:141] libmachine: (second-784000) DBG | Searching for 2e:f:4b:9f:7c:82 in /var/db/dhcpd_leases ...
	I0925 03:47:11.154198    3510 main.go:141] libmachine: (second-784000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0925 03:47:11.154208    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 03:47:11.154219    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 03:47:11.154227    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 03:47:11.154233    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 03:47:11.154240    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 03:47:11.154247    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 03:47:11.154254    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 03:47:13.155123    3510 main.go:141] libmachine: (second-784000) DBG | Attempt 2
	I0925 03:47:13.155135    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:13.155147    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:13.155993    3510 main.go:141] libmachine: (second-784000) DBG | Searching for 2e:f:4b:9f:7c:82 in /var/db/dhcpd_leases ...
	I0925 03:47:13.156033    3510 main.go:141] libmachine: (second-784000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0925 03:47:13.156039    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 03:47:13.156047    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 03:47:13.156061    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 03:47:13.156068    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 03:47:13.156073    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 03:47:13.156087    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 03:47:13.156103    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 03:47:14.760256    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0925 03:47:14.760339    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0925 03:47:14.760348    3510 main.go:141] libmachine: (second-784000) DBG | 2023/09/25 03:47:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0925 03:47:15.157120    3510 main.go:141] libmachine: (second-784000) DBG | Attempt 3
	I0925 03:47:15.157131    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:15.157225    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:15.158074    3510 main.go:141] libmachine: (second-784000) DBG | Searching for 2e:f:4b:9f:7c:82 in /var/db/dhcpd_leases ...
	I0925 03:47:15.158111    3510 main.go:141] libmachine: (second-784000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0925 03:47:15.158124    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 03:47:15.158140    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 03:47:15.158148    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 03:47:15.158154    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 03:47:15.158160    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 03:47:15.158166    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 03:47:15.158173    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 03:47:17.158413    3510 main.go:141] libmachine: (second-784000) DBG | Attempt 4
	I0925 03:47:17.158426    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:17.158501    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:17.159323    3510 main.go:141] libmachine: (second-784000) DBG | Searching for 2e:f:4b:9f:7c:82 in /var/db/dhcpd_leases ...
	I0925 03:47:17.159373    3510 main.go:141] libmachine: (second-784000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0925 03:47:17.159384    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:ba:aa:9:2:cb:33 ID:1,ba:aa:9:2:cb:33 Lease:0x6512b693}
	I0925 03:47:17.159394    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:e6:90:1:72:f5:16 ID:1,e6:90:1:72:f5:16 Lease:0x6512b64d}
	I0925 03:47:17.159399    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:5e:78:39:3d:9d:36 ID:1,5e:78:39:3d:9d:36 Lease:0x6512b5cb}
	I0925 03:47:17.159405    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:be:df:b7:7f:25:94 ID:1,be:df:b7:7f:25:94 Lease:0x6511643a}
	I0925 03:47:17.159410    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:ce:a6:ee:bb:f0:c0 ID:1,ce:a6:ee:bb:f0:c0 Lease:0x6512b4b0}
	I0925 03:47:17.159416    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:4e:b5:b0:41:db:a3 ID:1,4e:b5:b0:41:db:a3 Lease:0x65116325}
	I0925 03:47:17.159421    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:3a:9f:39:11:92:69 ID:1,3a:9f:39:11:92:69 Lease:0x6512b379}
	I0925 03:47:19.159632    3510 main.go:141] libmachine: (second-784000) DBG | Attempt 5
	I0925 03:47:19.159659    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:19.159728    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:19.160592    3510 main.go:141] libmachine: (second-784000) DBG | Searching for 2e:f:4b:9f:7c:82 in /var/db/dhcpd_leases ...
	I0925 03:47:19.160636    3510 main.go:141] libmachine: (second-784000) DBG | Found 8 entries in /var/db/dhcpd_leases!
	I0925 03:47:19.160647    3510 main.go:141] libmachine: (second-784000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:2e:f:4b:9f:7c:82 ID:1,2e:f:4b:9f:7c:82 Lease:0x6512b6b6}
	I0925 03:47:19.160654    3510 main.go:141] libmachine: (second-784000) DBG | Found match: 2e:f:4b:9f:7c:82
	I0925 03:47:19.160667    3510 main.go:141] libmachine: (second-784000) DBG | IP: 192.168.64.9
	I0925 03:47:19.160713    3510 main.go:141] libmachine: (second-784000) Calling .GetConfigRaw
	I0925 03:47:19.161199    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:19.161295    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:19.161386    3510 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0925 03:47:19.161391    3510 main.go:141] libmachine: (second-784000) Calling .GetState
	I0925 03:47:19.161466    3510 main.go:141] libmachine: (second-784000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:47:19.161525    3510 main.go:141] libmachine: (second-784000) DBG | hyperkit pid from json: 3519
	I0925 03:47:19.162360    3510 main.go:141] libmachine: Detecting operating system of created instance...
	I0925 03:47:19.162368    3510 main.go:141] libmachine: Waiting for SSH to be available...
	I0925 03:47:19.162371    3510 main.go:141] libmachine: Getting to WaitForSSH function...
	I0925 03:47:19.162377    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:19.162457    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:19.162530    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:19.162607    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:19.162681    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:19.162784    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:19.163066    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:19.163071    3510 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0925 03:47:19.185017    3510 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0925 03:47:22.242296    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 03:47:22.242303    3510 main.go:141] libmachine: Detecting the provisioner...
	I0925 03:47:22.242314    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.242439    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.242523    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.242607    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.242686    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.242843    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:22.243095    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:22.243099    3510 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0925 03:47:22.302711    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0925 03:47:22.302772    3510 main.go:141] libmachine: found compatible host: buildroot
	I0925 03:47:22.302776    3510 main.go:141] libmachine: Provisioning with buildroot...
	I0925 03:47:22.302780    3510 main.go:141] libmachine: (second-784000) Calling .GetMachineName
	I0925 03:47:22.302909    3510 buildroot.go:166] provisioning hostname "second-784000"
	I0925 03:47:22.302916    3510 main.go:141] libmachine: (second-784000) Calling .GetMachineName
	I0925 03:47:22.302994    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.303058    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.303139    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.303215    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.303292    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.303430    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:22.303669    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:22.303675    3510 main.go:141] libmachine: About to run SSH command:
	sudo hostname second-784000 && echo "second-784000" | sudo tee /etc/hostname
	I0925 03:47:22.369439    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: second-784000
	
	I0925 03:47:22.369456    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.369582    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.369664    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.369740    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.369822    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.369949    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:22.370199    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:22.370208    3510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\ssecond-784000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 second-784000/g' /etc/hosts;
				else 
					echo '127.0.1.1 second-784000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 03:47:22.435625    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 03:47:22.435646    3510 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1019/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1019/.minikube}
	I0925 03:47:22.435663    3510 buildroot.go:174] setting up certificates
	I0925 03:47:22.435672    3510 provision.go:83] configureAuth start
	I0925 03:47:22.435680    3510 main.go:141] libmachine: (second-784000) Calling .GetMachineName
	I0925 03:47:22.435817    3510 main.go:141] libmachine: (second-784000) Calling .GetIP
	I0925 03:47:22.435904    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.435991    3510 provision.go:138] copyHostCerts
	I0925 03:47:22.436063    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem, removing ...
	I0925 03:47:22.436070    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem
	I0925 03:47:22.436187    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem (1078 bytes)
	I0925 03:47:22.436392    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem, removing ...
	I0925 03:47:22.436395    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem
	I0925 03:47:22.436455    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem (1123 bytes)
	I0925 03:47:22.436642    3510 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem, removing ...
	I0925 03:47:22.436645    3510 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem
	I0925 03:47:22.436707    3510 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem (1675 bytes)
	I0925 03:47:22.436838    3510 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem org=jenkins.second-784000 san=[192.168.64.9 192.168.64.9 localhost 127.0.0.1 minikube second-784000]
	I0925 03:47:22.565157    3510 provision.go:172] copyRemoteCerts
	I0925 03:47:22.565207    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 03:47:22.565224    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.565363    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.565520    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.565619    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.565713    3510 sshutil.go:53] new ssh client: &{IP:192.168.64.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/id_rsa Username:docker}
	I0925 03:47:22.601401    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 03:47:22.617475    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0925 03:47:22.633671    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 03:47:22.649427    3510 provision.go:86] duration metric: configureAuth took 213.740303ms
	I0925 03:47:22.649435    3510 buildroot.go:189] setting minikube options for container-runtime
	I0925 03:47:22.649561    3510 config.go:182] Loaded profile config "second-784000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:47:22.649571    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:22.649701    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.649788    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.649871    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.649967    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.650037    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.650131    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:22.650375    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:22.650380    3510 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 03:47:22.710983    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 03:47:22.710989    3510 buildroot.go:70] root file system type: tmpfs
	I0925 03:47:22.711075    3510 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 03:47:22.711090    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.711213    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.711286    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.711369    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.711452    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.711574    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:22.711819    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:22.711865    3510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 03:47:22.779930    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 03:47:22.779949    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:22.780090    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:22.780186    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.780275    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:22.780363    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:22.780489    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:22.780736    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:22.780746    3510 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 03:47:23.253941    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 03:47:23.253952    3510 main.go:141] libmachine: Checking connection to Docker...
	I0925 03:47:23.253957    3510 main.go:141] libmachine: (second-784000) Calling .GetURL
	I0925 03:47:23.254095    3510 main.go:141] libmachine: Docker is up and running!
	I0925 03:47:23.254106    3510 main.go:141] libmachine: Reticulating splines...
	I0925 03:47:23.254113    3510 client.go:171] LocalClient.Create took 14.805894224s
	I0925 03:47:23.254123    3510 start.go:167] duration metric: libmachine.API.Create for "second-784000" took 14.805934764s
	I0925 03:47:23.254129    3510 start.go:300] post-start starting for "second-784000" (driver="hyperkit")
	I0925 03:47:23.254135    3510 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 03:47:23.254143    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:23.254278    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 03:47:23.254289    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:23.254393    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:23.254473    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:23.254553    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:23.254648    3510 sshutil.go:53] new ssh client: &{IP:192.168.64.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/id_rsa Username:docker}
	I0925 03:47:23.289692    3510 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 03:47:23.292350    3510 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 03:47:23.292360    3510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1019/.minikube/addons for local assets ...
	I0925 03:47:23.292439    3510 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1019/.minikube/files for local assets ...
	I0925 03:47:23.292583    3510 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem -> 14872.pem in /etc/ssl/certs
	I0925 03:47:23.292739    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 03:47:23.298379    3510 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem --> /etc/ssl/certs/14872.pem (1708 bytes)
	I0925 03:47:23.314465    3510 start.go:303] post-start completed in 60.321374ms
	I0925 03:47:23.314487    3510 main.go:141] libmachine: (second-784000) Calling .GetConfigRaw
	I0925 03:47:23.315070    3510 main.go:141] libmachine: (second-784000) Calling .GetIP
	I0925 03:47:23.315199    3510 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/second-784000/config.json ...
	I0925 03:47:23.315486    3510 start.go:128] duration metric: createHost completed in 14.920089845s
	I0925 03:47:23.315499    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:23.315587    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:23.315663    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:23.315742    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:23.315814    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:23.315914    3510 main.go:141] libmachine: Using SSH client type: native
	I0925 03:47:23.316148    3510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.9 22 <nil> <nil>}
	I0925 03:47:23.316156    3510 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0925 03:47:23.376320    3510 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695638843.434872596
	
	I0925 03:47:23.376326    3510 fix.go:206] guest clock: 1695638843.434872596
	I0925 03:47:23.376330    3510 fix.go:219] Guest: 2023-09-25 03:47:23.434872596 -0700 PDT Remote: 2023-09-25 03:47:23.315492 -0700 PDT m=+15.450360512 (delta=119.380596ms)
	I0925 03:47:23.376345    3510 fix.go:190] guest clock delta is within tolerance: 119.380596ms
	I0925 03:47:23.376347    3510 start.go:83] releasing machines lock for "second-784000", held for 14.981065179s
	I0925 03:47:23.376364    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:23.376491    3510 main.go:141] libmachine: (second-784000) Calling .GetIP
	I0925 03:47:23.376582    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:23.376840    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:23.376935    3510 main.go:141] libmachine: (second-784000) Calling .DriverName
	I0925 03:47:23.377025    3510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 03:47:23.377045    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:23.377096    3510 ssh_runner.go:195] Run: cat /version.json
	I0925 03:47:23.377103    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHHostname
	I0925 03:47:23.377135    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:23.377198    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:23.377201    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHPort
	I0925 03:47:23.377305    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:23.377320    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHKeyPath
	I0925 03:47:23.377413    3510 sshutil.go:53] new ssh client: &{IP:192.168.64.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/id_rsa Username:docker}
	I0925 03:47:23.377427    3510 main.go:141] libmachine: (second-784000) Calling .GetSSHUsername
	I0925 03:47:23.377496    3510 sshutil.go:53] new ssh client: &{IP:192.168.64.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/second-784000/id_rsa Username:docker}
	I0925 03:47:23.409848    3510 ssh_runner.go:195] Run: systemctl --version
	I0925 03:47:23.454524    3510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 03:47:23.458791    3510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 03:47:23.458840    3510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 03:47:23.469857    3510 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 03:47:23.469867    3510 start.go:469] detecting cgroup driver to use...
	I0925 03:47:23.469967    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:47:23.481590    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 03:47:23.488724    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 03:47:23.495646    3510 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 03:47:23.495684    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 03:47:23.502647    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:47:23.509525    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 03:47:23.516476    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 03:47:23.523472    3510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 03:47:23.530676    3510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 03:47:23.537711    3510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 03:47:23.544030    3510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 03:47:23.550843    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:47:23.633100    3510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 03:47:23.645224    3510 start.go:469] detecting cgroup driver to use...
	I0925 03:47:23.645292    3510 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 03:47:23.655867    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:47:23.665834    3510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 03:47:23.681018    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 03:47:23.689754    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:47:23.698610    3510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 03:47:23.725942    3510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 03:47:23.735194    3510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 03:47:23.746890    3510 ssh_runner.go:195] Run: which cri-dockerd
	I0925 03:47:23.749345    3510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 03:47:23.755434    3510 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 03:47:23.766322    3510 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 03:47:23.853763    3510 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 03:47:23.936937    3510 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 03:47:23.936999    3510 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 03:47:23.948209    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:47:24.034878    3510 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 03:47:25.386028    3510 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.351113319s)
	I0925 03:47:25.386088    3510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:47:25.470347    3510 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 03:47:25.553898    3510 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 03:47:25.652575    3510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 03:47:25.741278    3510 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 03:47:25.774416    3510 out.go:177] 
	W0925 03:47:25.794388    3510 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0925 03:47:25.794402    3510 out.go:239] * 
	W0925 03:47:25.794997    3510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0925 03:47:25.860089    3510 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 10:46:42 UTC, ends at Mon 2023-09-25 10:47:32 UTC. --
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.174459093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:19 first-782000 cri-dockerd[1060]: time="2023-09-25T10:47:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f2b921d312b5ff2d452cb5d0af821ccb49f9b5b21779b0539b6408b2834e438/resolv.conf as [nameserver 192.168.64.1]"
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.262669571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.262760674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.262783936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.262802705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.550499489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.550553137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.550570835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.550579053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:19 first-782000 cri-dockerd[1060]: time="2023-09-25T10:47:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/808c456d58aaa0f63e985b645a4dea7eeb2ac4d4d0ec17369db9cfc0be239fe2/resolv.conf as [nameserver 192.168.64.1]"
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.845795003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.845859427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.845876665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:47:19 first-782000 dockerd[1176]: time="2023-09-25T10:47:19.845886145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.533212246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.533313090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.533325290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.533332553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:20 first-782000 cri-dockerd[1060]: time="2023-09-25T10:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9db29fb3bfb4d194b009e04aa43aab3a7935aad3d2b78e2ec22fbf983c958ad5/resolv.conf as [nameserver 192.168.64.1]"
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.949773595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.949864062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.949899359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 10:47:20 first-782000 dockerd[1176]: time="2023-09-25T10:47:20.949920113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 10:47:26 first-782000 cri-dockerd[1060]: time="2023-09-25T10:47:26Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1a16824f4be85       ead0a4a53df89       12 seconds ago      Running             coredns                   0                   9db29fb3bfb4d       coredns-5dd5756b68-9dds8
	9da198c76a1cc       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   808c456d58aaa       storage-provisioner
	7fcc0be93c5f3       c120fed2beb84       13 seconds ago      Running             kube-proxy                0                   0f2b921d312b5       kube-proxy-fsftk
	e26c076596053       7a5d9d67a13f6       32 seconds ago      Running             kube-scheduler            0                   aad23754491f0       kube-scheduler-first-782000
	7d346f1ae1bdb       73deb9a3f7025       32 seconds ago      Running             etcd                      0                   b618b77a733f6       etcd-first-782000
	6a1349a0cc400       55f13c92defb1       32 seconds ago      Running             kube-controller-manager   0                   6748a480a33de       kube-controller-manager-first-782000
	e0d638d581337       cdcab12b2dd16       32 seconds ago      Running             kube-apiserver            0                   855bb75ab564c       kube-apiserver-first-782000
	
	* 
	* ==> coredns [1a16824f4be8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 82b95b61957b89eeea31bdaf6987f010031330ef97d5f8469dbdaa80b119a5b0c9955b961009dd5b77ee3ada002b456836be781510516cbd9d015b1a704a24ea
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50633 - 22957 "HINFO IN 8782597761479240492.1029431988610163596. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004314682s
	
	* 
	* ==> describe nodes <==
	* Name:               first-782000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=first-782000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=first-782000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T03_47_06_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 10:47:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  first-782000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 25 Sep 2023 10:47:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 10:47:26 +0000   Mon, 25 Sep 2023 10:47:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 10:47:26 +0000   Mon, 25 Sep 2023 10:47:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 10:47:26 +0000   Mon, 25 Sep 2023 10:47:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 10:47:26 +0000   Mon, 25 Sep 2023 10:47:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.8
	  Hostname:    first-782000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925796Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7b2a65d44714b1391309173fe84010f
	  System UUID:                cab711ee-0000-0000-b59c-149d997fca88
	  Boot ID:                    ef7bbc64-67bc-4c0d-9b26-c9a7e8899a9b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-9dds8                100m (5%)     0 (0%)      70Mi (1%)        170Mi (2%)     14s
	  kube-system                 etcd-first-782000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         28s
	  kube-system                 kube-apiserver-first-782000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-first-782000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-fsftk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-scheduler-first-782000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13s   kube-proxy       
	  Normal  Starting                 26s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26s   kubelet          Node first-782000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s   kubelet          Node first-782000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s   kubelet          Node first-782000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24s   kubelet          Node first-782000 status is now: NodeReady
	  Normal  RegisteredNode           14s   node-controller  Node first-782000 event: Registered Node first-782000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +5.033311] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.009113] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.987427] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.037669] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.856047] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.208999] systemd-fstab-generator[538]: Ignoring "noauto" for root device
	[  +0.089769] systemd-fstab-generator[549]: Ignoring "noauto" for root device
	[  +0.651344] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +0.299123] systemd-fstab-generator[793]: Ignoring "noauto" for root device
	[  +0.086247] systemd-fstab-generator[804]: Ignoring "noauto" for root device
	[  +0.101310] systemd-fstab-generator[817]: Ignoring "noauto" for root device
	[  +1.536305] systemd-fstab-generator[975]: Ignoring "noauto" for root device
	[  +0.083905] systemd-fstab-generator[986]: Ignoring "noauto" for root device
	[  +0.085181] systemd-fstab-generator[997]: Ignoring "noauto" for root device
	[  +0.095917] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.111999] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +5.482207] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +1.779904] kauditd_printk_skb: 55 callbacks suppressed
	[  +4.112616] systemd-fstab-generator[1543]: Ignoring "noauto" for root device
	[Sep25 10:47] systemd-fstab-generator[2427]: Ignoring "noauto" for root device
	[ +13.405489] kauditd_printk_skb: 39 callbacks suppressed
	
	* 
	* ==> etcd [7d346f1ae1bd] <==
	* {"level":"info","ts":"2023-09-25T10:47:01.243993Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-25T10:47:01.244283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 switched to configuration voters=(16421958229572656565)"}
	{"level":"info","ts":"2023-09-25T10:47:01.244438Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"833f6edb69fdc0db","local-member-id":"e3e68400a5fb1db5","added-peer-id":"e3e68400a5fb1db5","added-peer-peer-urls":["https://192.168.64.8:2380"]}
	{"level":"info","ts":"2023-09-25T10:47:01.806034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-25T10:47:01.806108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-25T10:47:01.806238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 received MsgPreVoteResp from e3e68400a5fb1db5 at term 1"}
	{"level":"info","ts":"2023-09-25T10:47:01.806299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 became candidate at term 2"}
	{"level":"info","ts":"2023-09-25T10:47:01.806404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 received MsgVoteResp from e3e68400a5fb1db5 at term 2"}
	{"level":"info","ts":"2023-09-25T10:47:01.806451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e3e68400a5fb1db5 became leader at term 2"}
	{"level":"info","ts":"2023-09-25T10:47:01.806468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e3e68400a5fb1db5 elected leader e3e68400a5fb1db5 at term 2"}
	{"level":"info","ts":"2023-09-25T10:47:01.810997Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e3e68400a5fb1db5","local-member-attributes":"{Name:first-782000 ClientURLs:[https://192.168.64.8:2379]}","request-path":"/0/members/e3e68400a5fb1db5/attributes","cluster-id":"833f6edb69fdc0db","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-25T10:47:01.810908Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:47:01.811381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:47:01.812383Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-25T10:47:01.812507Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-25T10:47:01.81254Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-25T10:47:01.811523Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"833f6edb69fdc0db","local-member-id":"e3e68400a5fb1db5","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:47:01.812845Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:47:01.812935Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-25T10:47:01.811542Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-25T10:47:01.824975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.64.8:2379"}
	{"level":"info","ts":"2023-09-25T10:47:08.139134Z","caller":"traceutil/trace.go:171","msg":"trace[2035897247] linearizableReadLoop","detail":"{readStateIndex:294; appliedIndex:293; }","duration":"112.820359ms","start":"2023-09-25T10:47:08.026298Z","end":"2023-09-25T10:47:08.139118Z","steps":["trace[2035897247] 'read index received'  (duration: 75.423152ms)","trace[2035897247] 'applied index is now lower than readState.Index'  (duration: 37.396091ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-25T10:47:08.139511Z","caller":"traceutil/trace.go:171","msg":"trace[350997527] transaction","detail":"{read_only:false; response_revision:283; number_of_response:1; }","duration":"113.972981ms","start":"2023-09-25T10:47:08.025521Z","end":"2023-09-25T10:47:08.139494Z","steps":["trace[350997527] 'process raft request'  (duration: 113.483689ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-25T10:47:08.139483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.075715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-09-25T10:47:08.140025Z","caller":"traceutil/trace.go:171","msg":"trace[1005540572] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:283; }","duration":"113.770413ms","start":"2023-09-25T10:47:08.026244Z","end":"2023-09-25T10:47:08.140014Z","steps":["trace[1005540572] 'agreement among raft nodes before linearized reading'  (duration: 112.976281ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  10:47:32 up 0 min,  0 users,  load average: 0.60, 0.19, 0.07
	Linux first-782000 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e0d638d58133] <==
	* I0925 10:47:03.099843       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0925 10:47:03.099878       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0925 10:47:03.099888       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0925 10:47:03.099980       1 aggregator.go:166] initial CRD sync complete...
	I0925 10:47:03.100009       1 autoregister_controller.go:141] Starting autoregister controller
	I0925 10:47:03.100013       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0925 10:47:03.100017       1 cache.go:39] Caches are synced for autoregister controller
	I0925 10:47:03.100622       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0925 10:47:03.102050       1 controller.go:624] quota admission added evaluator for: namespaces
	I0925 10:47:03.122189       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0925 10:47:04.005146       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0925 10:47:04.008578       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0925 10:47:04.008607       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0925 10:47:04.325420       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 10:47:04.347896       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0925 10:47:04.398171       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0925 10:47:04.402113       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.64.8]
	I0925 10:47:04.402953       1 controller.go:624] quota admission added evaluator for: endpoints
	I0925 10:47:04.405488       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0925 10:47:05.080315       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0925 10:47:06.125254       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0925 10:47:06.134798       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0925 10:47:06.141203       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0925 10:47:18.355112       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0925 10:47:18.822239       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6a1349a0cc40] <==
	* I0925 10:47:18.379585       1 shared_informer.go:318] Caches are synced for TTL
	I0925 10:47:18.379628       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0925 10:47:18.381648       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0925 10:47:18.384945       1 shared_informer.go:318] Caches are synced for endpoint
	I0925 10:47:18.385127       1 shared_informer.go:318] Caches are synced for HPA
	I0925 10:47:18.385206       1 shared_informer.go:318] Caches are synced for PV protection
	I0925 10:47:18.386086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.444556ms"
	I0925 10:47:18.389072       1 shared_informer.go:318] Caches are synced for stateful set
	I0925 10:47:18.404871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.958955ms"
	I0925 10:47:18.413899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.800144ms"
	I0925 10:47:18.414017       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.028µs"
	I0925 10:47:18.428404       1 shared_informer.go:318] Caches are synced for crt configmap
	I0925 10:47:18.440239       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 10:47:18.459504       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0925 10:47:18.474589       1 shared_informer.go:318] Caches are synced for resource quota
	I0925 10:47:18.564976       1 shared_informer.go:318] Caches are synced for attach detach
	I0925 10:47:18.828554       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fsftk"
	I0925 10:47:18.890177       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 10:47:18.931844       1 shared_informer.go:318] Caches are synced for garbage collector
	I0925 10:47:18.932077       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0925 10:47:20.188070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.25µs"
	I0925 10:47:20.197892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.281µs"
	I0925 10:47:21.414413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.085µs"
	I0925 10:47:21.433679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.037624ms"
	I0925 10:47:21.434093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.431µs"
	
	* 
	* ==> kube-proxy [7fcc0be93c5f] <==
	* I0925 10:47:19.337437       1 server_others.go:69] "Using iptables proxy"
	I0925 10:47:19.343214       1 node.go:141] Successfully retrieved node IP: 192.168.64.8
	I0925 10:47:19.364883       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0925 10:47:19.364918       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0925 10:47:19.367182       1 server_others.go:152] "Using iptables Proxier"
	I0925 10:47:19.367222       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0925 10:47:19.367448       1 server.go:846] "Version info" version="v1.28.2"
	I0925 10:47:19.367479       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0925 10:47:19.368066       1 config.go:188] "Starting service config controller"
	I0925 10:47:19.368099       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0925 10:47:19.368112       1 config.go:97] "Starting endpoint slice config controller"
	I0925 10:47:19.368115       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0925 10:47:19.368173       1 config.go:315] "Starting node config controller"
	I0925 10:47:19.368217       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0925 10:47:19.472087       1 shared_informer.go:318] Caches are synced for node config
	I0925 10:47:19.472529       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0925 10:47:19.472576       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [e26c07659605] <==
	* W0925 10:47:03.081524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 10:47:03.081558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0925 10:47:03.081668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:47:03.081737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0925 10:47:03.081685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:47:03.082054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:47:03.081858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:47:03.082337       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:47:03.900127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 10:47:03.900165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0925 10:47:03.929317       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 10:47:03.929404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0925 10:47:04.026518       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 10:47:04.026556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0925 10:47:04.026645       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0925 10:47:04.026653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0925 10:47:04.140739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 10:47:04.140758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0925 10:47:04.175213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 10:47:04.175294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0925 10:47:04.200260       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0925 10:47:04.200559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0925 10:47:04.204708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 10:47:04.204777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0925 10:47:04.573757       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 10:46:42 UTC, ends at Mon 2023-09-25 10:47:33 UTC. --
	Sep 25 10:47:07 first-782000 kubelet[2446]: I0925 10:47:07.240405    2446 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 25 10:47:07 first-782000 kubelet[2446]: I0925 10:47:07.367736    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-first-782000" podStartSLOduration=3.367683048 podCreationTimestamp="2023-09-25 10:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:07.361341184 +0000 UTC m=+1.252337182" watchObservedRunningTime="2023-09-25 10:47:07.367683048 +0000 UTC m=+1.258679039"
	Sep 25 10:47:07 first-782000 kubelet[2446]: I0925 10:47:07.374294    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-first-782000" podStartSLOduration=3.374268376 podCreationTimestamp="2023-09-25 10:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:07.368069059 +0000 UTC m=+1.259065050" watchObservedRunningTime="2023-09-25 10:47:07.374268376 +0000 UTC m=+1.265264368"
	Sep 25 10:47:07 first-782000 kubelet[2446]: I0925 10:47:07.383479    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-first-782000" podStartSLOduration=3.383453212 podCreationTimestamp="2023-09-25 10:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:07.374746965 +0000 UTC m=+1.265742962" watchObservedRunningTime="2023-09-25 10:47:07.383453212 +0000 UTC m=+1.274449209"
	Sep 25 10:47:07 first-782000 kubelet[2446]: I0925 10:47:07.393396    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-first-782000" podStartSLOduration=1.3933691879999999 podCreationTimestamp="2023-09-25 10:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:07.384404569 +0000 UTC m=+1.275400565" watchObservedRunningTime="2023-09-25 10:47:07.393369188 +0000 UTC m=+1.284365177"
	Sep 25 10:47:08 first-782000 kubelet[2446]: I0925 10:47:08.021208    2446 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.553787    2446 topology_manager.go:215] "Topology Admit Handler" podUID="c36d9e5b-55b7-403f-a8ce-0b027507d239" podNamespace="kube-system" podName="storage-provisioner"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.633786    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c36d9e5b-55b7-403f-a8ce-0b027507d239-tmp\") pod \"storage-provisioner\" (UID: \"c36d9e5b-55b7-403f-a8ce-0b027507d239\") " pod="kube-system/storage-provisioner"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.633959    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt9tj\" (UniqueName: \"kubernetes.io/projected/c36d9e5b-55b7-403f-a8ce-0b027507d239-kube-api-access-bt9tj\") pod \"storage-provisioner\" (UID: \"c36d9e5b-55b7-403f-a8ce-0b027507d239\") " pod="kube-system/storage-provisioner"
	Sep 25 10:47:18 first-782000 kubelet[2446]: E0925 10:47:18.738939    2446 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 25 10:47:18 first-782000 kubelet[2446]: E0925 10:47:18.739048    2446 projected.go:198] Error preparing data for projected volume kube-api-access-bt9tj for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 25 10:47:18 first-782000 kubelet[2446]: E0925 10:47:18.739127    2446 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c36d9e5b-55b7-403f-a8ce-0b027507d239-kube-api-access-bt9tj podName:c36d9e5b-55b7-403f-a8ce-0b027507d239 nodeName:}" failed. No retries permitted until 2023-09-25 10:47:19.239112495 +0000 UTC m=+13.130108484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bt9tj" (UniqueName: "kubernetes.io/projected/c36d9e5b-55b7-403f-a8ce-0b027507d239-kube-api-access-bt9tj") pod "storage-provisioner" (UID: "c36d9e5b-55b7-403f-a8ce-0b027507d239") : configmap "kube-root-ca.crt" not found
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.831829    2446 topology_manager.go:215] "Topology Admit Handler" podUID="33f325ea-75ae-41f4-ae95-4b92497c9038" podNamespace="kube-system" podName="kube-proxy-fsftk"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.935554    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33f325ea-75ae-41f4-ae95-4b92497c9038-lib-modules\") pod \"kube-proxy-fsftk\" (UID: \"33f325ea-75ae-41f4-ae95-4b92497c9038\") " pod="kube-system/kube-proxy-fsftk"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.935612    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33f325ea-75ae-41f4-ae95-4b92497c9038-xtables-lock\") pod \"kube-proxy-fsftk\" (UID: \"33f325ea-75ae-41f4-ae95-4b92497c9038\") " pod="kube-system/kube-proxy-fsftk"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.935635    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcj27\" (UniqueName: \"kubernetes.io/projected/33f325ea-75ae-41f4-ae95-4b92497c9038-kube-api-access-jcj27\") pod \"kube-proxy-fsftk\" (UID: \"33f325ea-75ae-41f4-ae95-4b92497c9038\") " pod="kube-system/kube-proxy-fsftk"
	Sep 25 10:47:18 first-782000 kubelet[2446]: I0925 10:47:18.935655    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33f325ea-75ae-41f4-ae95-4b92497c9038-kube-proxy\") pod \"kube-proxy-fsftk\" (UID: \"33f325ea-75ae-41f4-ae95-4b92497c9038\") " pod="kube-system/kube-proxy-fsftk"
	Sep 25 10:47:20 first-782000 kubelet[2446]: I0925 10:47:20.187615    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fsftk" podStartSLOduration=2.187540486 podCreationTimestamp="2023-09-25 10:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:19.392316938 +0000 UTC m=+13.283312929" watchObservedRunningTime="2023-09-25 10:47:20.187540486 +0000 UTC m=+14.078536476"
	Sep 25 10:47:20 first-782000 kubelet[2446]: I0925 10:47:20.188024    2446 topology_manager.go:215] "Topology Admit Handler" podUID="20a762f9-8804-410a-b8c2-d6340f031479" podNamespace="kube-system" podName="coredns-5dd5756b68-9dds8"
	Sep 25 10:47:20 first-782000 kubelet[2446]: I0925 10:47:20.246600    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20a762f9-8804-410a-b8c2-d6340f031479-config-volume\") pod \"coredns-5dd5756b68-9dds8\" (UID: \"20a762f9-8804-410a-b8c2-d6340f031479\") " pod="kube-system/coredns-5dd5756b68-9dds8"
	Sep 25 10:47:20 first-782000 kubelet[2446]: I0925 10:47:20.246819    2446 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9nl\" (UniqueName: \"kubernetes.io/projected/20a762f9-8804-410a-b8c2-d6340f031479-kube-api-access-dc9nl\") pod \"coredns-5dd5756b68-9dds8\" (UID: \"20a762f9-8804-410a-b8c2-d6340f031479\") " pod="kube-system/coredns-5dd5756b68-9dds8"
	Sep 25 10:47:20 first-782000 kubelet[2446]: I0925 10:47:20.819576    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.819550562 podCreationTimestamp="2023-09-25 10:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:20.402071167 +0000 UTC m=+14.293067163" watchObservedRunningTime="2023-09-25 10:47:20.819550562 +0000 UTC m=+14.710546553"
	Sep 25 10:47:21 first-782000 kubelet[2446]: I0925 10:47:21.415918    2446 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9dds8" podStartSLOduration=3.415854151 podCreationTimestamp="2023-09-25 10:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-25 10:47:21.415546708 +0000 UTC m=+15.306542705" watchObservedRunningTime="2023-09-25 10:47:21.415854151 +0000 UTC m=+15.306850142"
	Sep 25 10:47:26 first-782000 kubelet[2446]: I0925 10:47:26.981225    2446 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 25 10:47:26 first-782000 kubelet[2446]: I0925 10:47:26.982608    2446 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	* 
	* ==> storage-provisioner [9da198c76a1c] <==
	* I0925 10:47:19.907243       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 10:47:19.912876       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 10:47:19.913576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 10:47:19.918474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 10:47:19.918638       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_first-782000_405272d0-8122-479f-97b5-09c0b46aeeb3!
	I0925 10:47:19.920092       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"94bda7ba-dd1c-4fc6-a006-0186c60c2d0c", APIVersion:"v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' first-782000_405272d0-8122-479f-97b5-09c0b46aeeb3 became leader
	I0925 10:47:20.019963       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_first-782000_405272d0-8122-479f-97b5-09c0b46aeeb3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p first-782000 -n first-782000
helpers_test.go:261: (dbg) Run:  kubectl --context first-782000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMinikubeProfile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "first-782000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-782000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-782000: (5.245470878s)
--- FAIL: TestMinikubeProfile (66.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-596000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-596000 "sudo crictl images -o json": exit status 1 (143.719586ms)

                                                
                                                
-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-596000 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
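The decode failure above is consistent with crictl aborting before printing any JSON: the captured output begins with byte 0x1b (an escape character, most likely from an ANSI colour code in the FATA log line), so Go's encoding/json rejects it immediately. A minimal sketch of that failure mode, assuming nothing about the test's real helpers; the byte string below is a stand-in, not the actual captured output:
	// Hypothetical reproduction: feeding a logrus FATA line (which starts with an
	// ANSI escape, 0x1b) to json.Unmarshal yields the same "invalid character '\x1b'
	// looking for beginning of value" error reported by the test.
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		// Stand-in for the raw bytes captured from "sudo crictl images -o json":
		// an escape-coded error line instead of a JSON document.
		raw := []byte("\x1b[31mFATA[0000] validate service connection: ...\x1b[0m")
	
		var out map[string]any
		if err := json.Unmarshal(raw, &out); err != nil {
			fmt.Println("decode error:", err) // invalid character '\x1b' looking for beginning of value
		}
	}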
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
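The "(-want +got)" block above is in the format produced by the go-cmp library, where a leading "-" marks an entry the test expected but did not find; because the crictl call failed, every expected v1.16.0 image is reported as missing. As an illustration only (the test's actual comparison code is not shown in this report), a comparable diff can be generated like this:
	// Illustrative sketch, not the test's real helper: comparing an expected image
	// list against an empty result reproduces a "(-want +got)" diff of this shape.
	package main
	
	import (
		"fmt"
	
		"github.com/google/go-cmp/cmp"
	)
	
	func main() {
		want := []string{
			"k8s.gcr.io/coredns:1.6.2",
			"k8s.gcr.io/kube-apiserver:v1.16.0",
		}
		got := []string{} // nothing was decoded, since "crictl images -o json" failed
	
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.16.0 images missing (-want +got):\n%s", diff)
		}
	}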
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-596000 -n old-k8s-version-596000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-596000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-596000 logs -n 25: (2.078200561s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p false-803000 sudo systemctl                         | false-803000           | jenkins | v1.31.2 | 25 Sep 23 04:19 PDT |                     |
	|         | status crio --all --full                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-803000 sudo systemctl                         | false-803000           | jenkins | v1.31.2 | 25 Sep 23 04:19 PDT | 25 Sep 23 04:19 PDT |
	|         | cat crio --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-803000 sudo find                              | false-803000           | jenkins | v1.31.2 | 25 Sep 23 04:19 PDT | 25 Sep 23 04:19 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p false-803000 sudo crio                              | false-803000           | jenkins | v1.31.2 | 25 Sep 23 04:19 PDT | 25 Sep 23 04:19 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p false-803000                                        | false-803000           | jenkins | v1.31.2 | 25 Sep 23 04:19 PDT | 25 Sep 23 04:19 PDT |
	| start   | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:19 PDT | 25 Sep 23 04:20 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-596000        | old-k8s-version-596000 | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:20 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-596000                              | old-k8s-version-596000 | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:20 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-596000             | old-k8s-version-596000 | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:20 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-596000                              | old-k8s-version-596000 | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:28 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-821000             | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:20 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:20 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-821000                  | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:20 PDT | 25 Sep 23 04:20 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:21 PDT | 25 Sep 23 04:25 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-821000 sudo                              | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	| delete  | -p no-preload-821000                                   | no-preload-821000      | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:26 PDT |
	| start   | -p embed-certs-952000                                  | embed-certs-952000     | jenkins | v1.31.2 | 25 Sep 23 04:26 PDT | 25 Sep 23 04:27 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-952000            | embed-certs-952000     | jenkins | v1.31.2 | 25 Sep 23 04:27 PDT | 25 Sep 23 04:27 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-952000                                  | embed-certs-952000     | jenkins | v1.31.2 | 25 Sep 23 04:27 PDT | 25 Sep 23 04:28 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-952000                 | embed-certs-952000     | jenkins | v1.31.2 | 25 Sep 23 04:28 PDT | 25 Sep 23 04:28 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-952000                                  | embed-certs-952000     | jenkins | v1.31.2 | 25 Sep 23 04:28 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-596000 sudo                         | old-k8s-version-596000 | jenkins | v1.31.2 | 25 Sep 23 04:28 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 04:28:03
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 04:28:03.276312    9153 out.go:296] Setting OutFile to fd 1 ...
	I0925 04:28:03.276556    9153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:28:03.276561    9153 out.go:309] Setting ErrFile to fd 2...
	I0925 04:28:03.276565    9153 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 04:28:03.276745    9153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 04:28:03.278080    9153 out.go:303] Setting JSON to false
	I0925 04:28:03.297498    9153 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3457,"bootTime":1695637826,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 04:28:03.297589    9153 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 04:28:03.320015    9153 out.go:177] * [embed-certs-952000] minikube v1.31.2 on Darwin 13.6
	I0925 04:28:03.363258    9153 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 04:28:03.363329    9153 notify.go:220] Checking for updates...
	I0925 04:28:03.406012    9153 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 04:28:03.427103    9153 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 04:28:03.448270    9153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 04:28:03.469343    9153 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 04:28:03.490173    9153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 04:28:03.512021    9153 config.go:182] Loaded profile config "embed-certs-952000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:28:03.512746    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:03.512828    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:03.520861    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56460
	I0925 04:28:03.521220    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:03.521655    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:03.521669    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:03.521920    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:03.522057    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:03.522250    9153 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 04:28:03.522486    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:03.522529    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:03.529639    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56462
	I0925 04:28:03.529961    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:03.530339    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:03.530356    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:03.530564    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:03.530675    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:03.558066    9153 out.go:177] * Using the hyperkit driver based on existing profile
	I0925 04:28:03.600239    9153 start.go:298] selected driver: hyperkit
	I0925 04:28:03.600266    9153 start.go:902] validating driver "hyperkit" against &{Name:embed-certs-952000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-952000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:28:03.600467    9153 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 04:28:03.604438    9153 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:28:03.604529    9153 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17297-1019/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0925 04:28:03.611398    9153 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0925 04:28:03.614883    9153 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:03.614901    9153 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0925 04:28:03.615021    9153 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0925 04:28:03.615048    9153 cni.go:84] Creating CNI manager for ""
	I0925 04:28:03.615060    9153 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:28:03.615073    9153 start_flags.go:321] config:
	{Name:embed-certs-952000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:embed-certs-952000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:28:03.615207    9153 iso.go:125] acquiring lock: {Name:mk5685b8103aa0f952a2e44c47bdd1882fdd0bc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 04:28:03.657080    9153 out.go:177] * Starting control plane node embed-certs-952000 in cluster embed-certs-952000
	I0925 04:28:03.677962    9153 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:28:03.678018    9153 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 04:28:03.678058    9153 cache.go:57] Caching tarball of preloaded images
	I0925 04:28:03.678163    9153 preload.go:174] Found /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0925 04:28:03.678173    9153 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I0925 04:28:03.678261    9153 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/config.json ...
	I0925 04:28:03.678622    9153 start.go:365] acquiring machines lock for embed-certs-952000: {Name:mkc5a9c335a363bfa8f942e55cb9e7e0d08ada9f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0925 04:28:03.678672    9153 start.go:369] acquired machines lock for "embed-certs-952000" in 38.169µs
	I0925 04:28:03.678691    9153 start.go:96] Skipping create...Using existing machine configuration
	I0925 04:28:03.678710    9153 fix.go:54] fixHost starting: 
	I0925 04:28:03.678926    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:03.678948    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:03.686221    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56464
	I0925 04:28:03.686543    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:03.686940    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:03.686956    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:03.687191    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:03.687319    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:03.687414    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetState
	I0925 04:28:03.687510    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:03.687568    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9084
	I0925 04:28:03.688538    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid 9084 missing from process table
	I0925 04:28:03.688568    9153 fix.go:102] recreateIfNeeded on embed-certs-952000: state=Stopped err=<nil>
	I0925 04:28:03.688586    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	W0925 04:28:03.688663    9153 fix.go:128] unexpected machine state, will restart: <nil>
	I0925 04:28:03.747375    9153 out.go:177] * Restarting existing hyperkit VM for "embed-certs-952000" ...
	I0925 04:28:04.442023    8765 system_pods.go:86] 4 kube-system pods found
	I0925 04:28:04.442037    8765 system_pods.go:89] "coredns-5644d7b6d9-fhm6m" [261b0b1b-bbe6-420e-af7f-6e7cab5f7c35] Running
	I0925 04:28:04.442041    8765 system_pods.go:89] "kube-proxy-9jcsn" [1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7] Running
	I0925 04:28:04.442046    8765 system_pods.go:89] "metrics-server-74d5856cc6-8mcq6" [e3f5ea81-2285-4c21-9779-49cd559184dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 04:28:04.442051    8765 system_pods.go:89] "storage-provisioner" [5b3475c5-65f3-44a9-8561-bf6c52a49a9e] Running
	I0925 04:28:04.442060    8765 retry.go:31] will retry after 5.989730828s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0925 04:28:03.770293    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Start
	I0925 04:28:03.770507    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:03.770542    9153 main.go:141] libmachine: (embed-certs-952000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/hyperkit.pid
	I0925 04:28:03.771718    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid 9084 missing from process table
	I0925 04:28:03.771736    9153 main.go:141] libmachine: (embed-certs-952000) DBG | pid 9084 is in state "Stopped"
	I0925 04:28:03.771748    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/hyperkit.pid...
	I0925 04:28:03.771867    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Using UUID 586bb914-5b96-11ee-be5f-149d997fca88
	I0925 04:28:03.800542    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Generated MAC 26:bc:14:7f:ef:ed
	I0925 04:28:03.800568    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-952000
	I0925 04:28:03.800715    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"586bb914-5b96-11ee-be5f-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00040a600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0925 04:28:03.800745    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"586bb914-5b96-11ee-be5f-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00040a600)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0925 04:28:03.800797    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "586bb914-5b96-11ee-be5f-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/embed-certs-952000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/tty,log=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/bzimage,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-952000"}
	I0925 04:28:03.800848    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 586bb914-5b96-11ee-be5f-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/embed-certs-952000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/tty,log=/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/console-ring -f kexec,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/bzimage,/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-952000"
	I0925 04:28:03.800861    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0925 04:28:03.802157    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 DEBUG: hyperkit: Pid is 9164
	I0925 04:28:03.802524    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Attempt 0
	I0925 04:28:03.802535    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:03.802612    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9164
	I0925 04:28:03.804117    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Searching for 26:bc:14:7f:ef:ed in /var/db/dhcpd_leases ...
	I0925 04:28:03.804174    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Found 41 entries in /var/db/dhcpd_leases!
	I0925 04:28:03.804193    9153 main.go:141] libmachine: (embed-certs-952000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.42 HWAddress:26:bc:14:7f:ef:ed ID:1,26:bc:14:7f:ef:ed Lease:0x6512bfe2}
	I0925 04:28:03.804235    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Found match: 26:bc:14:7f:ef:ed
	I0925 04:28:03.804258    9153 main.go:141] libmachine: (embed-certs-952000) DBG | IP: 192.168.64.42
	I0925 04:28:03.804285    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetConfigRaw
	I0925 04:28:03.804844    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetIP
	I0925 04:28:03.805025    9153 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/config.json ...
	I0925 04:28:03.805437    9153 machine.go:88] provisioning docker machine ...
	I0925 04:28:03.805451    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:03.805554    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetMachineName
	I0925 04:28:03.805649    9153 buildroot.go:166] provisioning hostname "embed-certs-952000"
	I0925 04:28:03.805658    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetMachineName
	I0925 04:28:03.805735    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:03.805809    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:03.805894    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:03.805973    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:03.806065    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:03.806182    9153 main.go:141] libmachine: Using SSH client type: native
	I0925 04:28:03.806472    9153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0925 04:28:03.806482    9153 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-952000 && echo "embed-certs-952000" | sudo tee /etc/hostname
	I0925 04:28:03.808566    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0925 04:28:03.816781    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0925 04:28:03.817658    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0925 04:28:03.817685    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0925 04:28:03.817695    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0925 04:28:03.817711    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0925 04:28:04.183387    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0925 04:28:04.183404    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0925 04:28:04.287460    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0925 04:28:04.287481    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0925 04:28:04.287510    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0925 04:28:04.287525    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0925 04:28:04.288333    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0925 04:28:04.288347    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0925 04:28:10.435961    8765 system_pods.go:86] 4 kube-system pods found
	I0925 04:28:10.435974    8765 system_pods.go:89] "coredns-5644d7b6d9-fhm6m" [261b0b1b-bbe6-420e-af7f-6e7cab5f7c35] Running
	I0925 04:28:10.435979    8765 system_pods.go:89] "kube-proxy-9jcsn" [1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7] Running
	I0925 04:28:10.435992    8765 system_pods.go:89] "metrics-server-74d5856cc6-8mcq6" [e3f5ea81-2285-4c21-9779-49cd559184dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 04:28:10.435998    8765 system_pods.go:89] "storage-provisioner" [5b3475c5-65f3-44a9-8561-bf6c52a49a9e] Running
	I0925 04:28:10.436009    8765 retry.go:31] will retry after 9.143725034s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0925 04:28:09.153307    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:09 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0925 04:28:09.153356    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:09 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0925 04:28:09.153388    9153 main.go:141] libmachine: (embed-certs-952000) DBG | 2023/09/25 04:28:09 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0925 04:28:16.990204    9153 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-952000
	
	I0925 04:28:16.990223    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:16.990367    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:16.990454    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:16.990547    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:16.990655    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:16.990789    9153 main.go:141] libmachine: Using SSH client type: native
	I0925 04:28:16.991042    9153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0925 04:28:16.991057    9153 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-952000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-952000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-952000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0925 04:28:17.059059    9153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0925 04:28:17.059077    9153 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17297-1019/.minikube CaCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17297-1019/.minikube}
	I0925 04:28:17.059092    9153 buildroot.go:174] setting up certificates
	I0925 04:28:17.059102    9153 provision.go:83] configureAuth start
	I0925 04:28:17.059111    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetMachineName
	I0925 04:28:17.059262    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetIP
	I0925 04:28:17.059351    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:17.059435    9153 provision.go:138] copyHostCerts
	I0925 04:28:17.059528    9153 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem, removing ...
	I0925 04:28:17.059539    9153 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem
	I0925 04:28:17.059659    9153 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/cert.pem (1123 bytes)
	I0925 04:28:17.059897    9153 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem, removing ...
	I0925 04:28:17.059904    9153 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem
	I0925 04:28:17.059969    9153 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/key.pem (1675 bytes)
	I0925 04:28:17.060118    9153 exec_runner.go:144] found /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem, removing ...
	I0925 04:28:17.060130    9153 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem
	I0925 04:28:17.060198    9153 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.pem (1078 bytes)
	I0925 04:28:17.060325    9153 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem org=jenkins.embed-certs-952000 san=[192.168.64.42 192.168.64.42 localhost 127.0.0.1 minikube embed-certs-952000]
	I0925 04:28:17.184037    9153 provision.go:172] copyRemoteCerts
	I0925 04:28:17.184102    9153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0925 04:28:17.184124    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:17.184269    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:17.184363    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.184447    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:17.184528    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:17.222392    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0925 04:28:17.239209    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0925 04:28:17.256157    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0925 04:28:17.273993    9153 provision.go:86] duration metric: configureAuth took 214.876189ms
	I0925 04:28:17.274007    9153 buildroot.go:189] setting minikube options for container-runtime
	I0925 04:28:17.274144    9153 config.go:182] Loaded profile config "embed-certs-952000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:28:17.274181    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:17.274316    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:17.274409    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:17.274502    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.274586    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.274666    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:17.274771    9153 main.go:141] libmachine: Using SSH client type: native
	I0925 04:28:17.274999    9153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0925 04:28:17.275007    9153 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0925 04:28:17.335948    9153 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0925 04:28:17.335962    9153 buildroot.go:70] root file system type: tmpfs
	I0925 04:28:17.336031    9153 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0925 04:28:17.336050    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:17.336178    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:17.336267    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.336354    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.336448    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:17.336573    9153 main.go:141] libmachine: Using SSH client type: native
	I0925 04:28:17.336818    9153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0925 04:28:17.336868    9153 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0925 04:28:17.405745    9153 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0925 04:28:17.405772    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:17.405898    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:17.406007    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.406100    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.406192    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:17.406330    9153 main.go:141] libmachine: Using SSH client type: native
	I0925 04:28:17.406577    9153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0925 04:28:17.406590    9153 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0925 04:28:17.990853    9153 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0925 04:28:17.990868    9153 machine.go:91] provisioned docker machine in 14.185375494s
	I0925 04:28:17.990881    9153 start.go:300] post-start starting for "embed-certs-952000" (driver="hyperkit")
	I0925 04:28:17.990890    9153 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0925 04:28:17.990902    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:17.991096    9153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0925 04:28:17.991111    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:17.991200    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:17.991277    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:17.991371    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:17.991469    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:18.028325    9153 ssh_runner.go:195] Run: cat /etc/os-release
	I0925 04:28:18.031209    9153 info.go:137] Remote host: Buildroot 2021.02.12
	I0925 04:28:18.031223    9153 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1019/.minikube/addons for local assets ...
	I0925 04:28:18.031306    9153 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17297-1019/.minikube/files for local assets ...
	I0925 04:28:18.031455    9153 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem -> 14872.pem in /etc/ssl/certs
	I0925 04:28:18.031623    9153 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0925 04:28:18.037350    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem --> /etc/ssl/certs/14872.pem (1708 bytes)
	I0925 04:28:18.053321    9153 start.go:303] post-start completed in 62.431352ms
	I0925 04:28:18.053336    9153 fix.go:56] fixHost completed within 14.374591756s
	I0925 04:28:18.053350    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:18.053488    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:18.053585    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:18.053676    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:18.053766    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:18.053877    9153 main.go:141] libmachine: Using SSH client type: native
	I0925 04:28:18.054111    9153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.64.42 22 <nil> <nil>}
	I0925 04:28:18.054119    9153 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0925 04:28:18.115607    9153 main.go:141] libmachine: SSH cmd err, output: <nil>: 1695641298.082157995
	
	I0925 04:28:18.115619    9153 fix.go:206] guest clock: 1695641298.082157995
	I0925 04:28:18.115625    9153 fix.go:219] Guest: 2023-09-25 04:28:18.082157995 -0700 PDT Remote: 2023-09-25 04:28:18.05334 -0700 PDT m=+14.807503251 (delta=28.817995ms)
	I0925 04:28:18.115643    9153 fix.go:190] guest clock delta is within tolerance: 28.817995ms
	I0925 04:28:18.115646    9153 start.go:83] releasing machines lock for "embed-certs-952000", held for 14.43692058s
	I0925 04:28:18.115663    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:18.115787    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetIP
	I0925 04:28:18.115887    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:18.116157    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:18.116250    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:18.116318    9153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0925 04:28:18.116355    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:18.116381    9153 ssh_runner.go:195] Run: cat /version.json
	I0925 04:28:18.116394    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:18.116483    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:18.116497    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:18.116576    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:18.116589    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:18.116671    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:18.116699    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:18.116782    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:18.116802    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:18.193733    9153 ssh_runner.go:195] Run: systemctl --version
	I0925 04:28:18.198084    9153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0925 04:28:18.201709    9153 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0925 04:28:18.201755    9153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0925 04:28:18.212149    9153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0925 04:28:18.212164    9153 start.go:469] detecting cgroup driver to use...
	I0925 04:28:18.212260    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:28:18.224034    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0925 04:28:18.230984    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0925 04:28:18.238095    9153 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0925 04:28:18.238153    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0925 04:28:18.244999    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:28:18.251905    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0925 04:28:18.258768    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0925 04:28:18.265618    9153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0925 04:28:18.272660    9153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0925 04:28:18.279638    9153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0925 04:28:18.307373    9153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0925 04:28:18.313709    9153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:28:18.393710    9153 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0925 04:28:18.405850    9153 start.go:469] detecting cgroup driver to use...
	I0925 04:28:18.405920    9153 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0925 04:28:18.415572    9153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:28:18.424703    9153 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0925 04:28:18.438300    9153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0925 04:28:18.446864    9153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:28:18.455281    9153 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0925 04:28:18.476046    9153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0925 04:28:18.485473    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0925 04:28:18.497843    9153 ssh_runner.go:195] Run: which cri-dockerd
	I0925 04:28:18.500152    9153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0925 04:28:18.505655    9153 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0925 04:28:18.516570    9153 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0925 04:28:18.597931    9153 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0925 04:28:18.681875    9153 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I0925 04:28:18.681945    9153 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0925 04:28:18.692285    9153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:28:18.773274    9153 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0925 04:28:20.031175    9153 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.257876512s)
	I0925 04:28:20.031242    9153 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:28:20.117343    9153 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0925 04:28:20.202805    9153 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0925 04:28:20.290526    9153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:28:20.375777    9153 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0925 04:28:20.387920    9153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0925 04:28:20.471927    9153 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0925 04:28:20.525424    9153 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0925 04:28:20.525512    9153 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0925 04:28:20.529352    9153 start.go:537] Will wait 60s for crictl version
	I0925 04:28:20.529411    9153 ssh_runner.go:195] Run: which crictl
	I0925 04:28:20.531997    9153 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0925 04:28:20.570280    9153 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I0925 04:28:20.570353    9153 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:28:20.588295    9153 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0925 04:28:19.585045    8765 system_pods.go:86] 5 kube-system pods found
	I0925 04:28:19.585077    8765 system_pods.go:89] "coredns-5644d7b6d9-fhm6m" [261b0b1b-bbe6-420e-af7f-6e7cab5f7c35] Running
	I0925 04:28:19.585082    8765 system_pods.go:89] "kube-proxy-9jcsn" [1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7] Running
	I0925 04:28:19.585087    8765 system_pods.go:89] "kube-scheduler-old-k8s-version-596000" [29492770-13f3-480b-b887-d880e8df61a6] Pending
	I0925 04:28:19.585092    8765 system_pods.go:89] "metrics-server-74d5856cc6-8mcq6" [e3f5ea81-2285-4c21-9779-49cd559184dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 04:28:19.585096    8765 system_pods.go:89] "storage-provisioner" [5b3475c5-65f3-44a9-8561-bf6c52a49a9e] Running
	I0925 04:28:19.585104    8765 retry.go:31] will retry after 12.164129069s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0925 04:28:20.628298    9153 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I0925 04:28:20.628347    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetIP
	I0925 04:28:20.628740    9153 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0925 04:28:20.632714    9153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 04:28:20.641498    9153 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 04:28:20.641559    9153 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:28:20.655062    9153 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 04:28:20.655084    9153 docker.go:594] Images already preloaded, skipping extraction
	I0925 04:28:20.655162    9153 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0925 04:28:20.668218    9153 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0925 04:28:20.668240    9153 cache_images.go:84] Images are preloaded, skipping loading
	I0925 04:28:20.668314    9153 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0925 04:28:20.685930    9153 cni.go:84] Creating CNI manager for ""
	I0925 04:28:20.685944    9153 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:28:20.685956    9153 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0925 04:28:20.685972    9153 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.42 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-952000 NodeName:embed-certs-952000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0925 04:28:20.686089    9153 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-952000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
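The kubeadm/kubelet config above is rendered per profile from the options struct logged just before it (note the evictionHard thresholds of "0%", which effectively disable disk-pressure eviction inside the VM). A minimal text/template sketch of generating a fragment of this shape; the template text and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	// Values taken from the log above; everything else is illustrative.
	_ = t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.64.42", 8443, "embed-certs-952000"})
}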
	I0925 04:28:20.686139    9153 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-952000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:embed-certs-952000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
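The empty `ExecStart=` line in the drop-in above is deliberate: in a systemd override it clears the ExecStart inherited from kubelet.service before the full command line is redefined, and the unit only takes effect after a daemon-reload. A rough sketch of writing such a drop-in; the path matches the log, the ExecStart shown is abridged, and the whole thing is illustrative (it needs root to actually run):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	// Writing under /etc/systemd requires root; sketch only.
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		fmt.Println("write:", err)
		return
	}
	// systemd only re-reads unit files after a daemon-reload.
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		fmt.Println("daemon-reload:", err, string(out))
	}
}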
	I0925 04:28:20.686197    9153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0925 04:28:20.692218    9153 binaries.go:44] Found k8s binaries, skipping transfer
	I0925 04:28:20.692265    9153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0925 04:28:20.698086    9153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0925 04:28:20.709246    9153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0925 04:28:20.720129    9153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0925 04:28:20.731173    9153 ssh_runner.go:195] Run: grep 192.168.64.42	control-plane.minikube.internal$ /etc/hosts
	I0925 04:28:20.733472    9153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0925 04:28:20.741479    9153 certs.go:56] Setting up /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000 for IP: 192.168.64.42
	I0925 04:28:20.741501    9153 certs.go:190] acquiring lock for shared ca certs: {Name:mk3676345378806ecb3fbe1837a9e59a7bfdf67f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:28:20.741643    9153 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.key
	I0925 04:28:20.741693    9153 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17297-1019/.minikube/proxy-client-ca.key
	I0925 04:28:20.741781    9153 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/client.key
	I0925 04:28:20.741846    9153 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/apiserver.key.7a4a9a43
	I0925 04:28:20.741897    9153 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/proxy-client.key
	I0925 04:28:20.742095    9153 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/1487.pem (1338 bytes)
	W0925 04:28:20.742136    9153 certs.go:433] ignoring /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/1487_empty.pem, impossibly tiny 0 bytes
	I0925 04:28:20.742152    9153 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca-key.pem (1679 bytes)
	I0925 04:28:20.742183    9153 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/ca.pem (1078 bytes)
	I0925 04:28:20.742213    9153 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/cert.pem (1123 bytes)
	I0925 04:28:20.742244    9153 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/certs/key.pem (1675 bytes)
	I0925 04:28:20.742312    9153 certs.go:437] found cert: /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem (1708 bytes)
	I0925 04:28:20.742773    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0925 04:28:20.758908    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0925 04:28:20.774914    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0925 04:28:20.790773    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/embed-certs-952000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0925 04:28:20.806577    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0925 04:28:20.822668    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0925 04:28:20.839134    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0925 04:28:20.854958    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0925 04:28:20.871268    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/ssl/certs/14872.pem --> /usr/share/ca-certificates/14872.pem (1708 bytes)
	I0925 04:28:20.887087    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0925 04:28:20.902819    9153 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17297-1019/.minikube/certs/1487.pem --> /usr/share/ca-certificates/1487.pem (1338 bytes)
	I0925 04:28:20.918633    9153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0925 04:28:20.929740    9153 ssh_runner.go:195] Run: openssl version
	I0925 04:28:20.933235    9153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14872.pem && ln -fs /usr/share/ca-certificates/14872.pem /etc/ssl/certs/14872.pem"
	I0925 04:28:20.939820    9153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14872.pem
	I0925 04:28:20.942709    9153 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 25 10:38 /usr/share/ca-certificates/14872.pem
	I0925 04:28:20.942752    9153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14872.pem
	I0925 04:28:20.946432    9153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14872.pem /etc/ssl/certs/3ec20f2e.0"
	I0925 04:28:20.953017    9153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0925 04:28:20.959561    9153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:28:20.962577    9153 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 25 10:33 /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:28:20.962612    9153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0925 04:28:20.966209    9153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0925 04:28:20.973057    9153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1487.pem && ln -fs /usr/share/ca-certificates/1487.pem /etc/ssl/certs/1487.pem"
	I0925 04:28:20.979581    9153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1487.pem
	I0925 04:28:20.982583    9153 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 25 10:38 /usr/share/ca-certificates/1487.pem
	I0925 04:28:20.982619    9153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1487.pem
	I0925 04:28:20.986171    9153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1487.pem /etc/ssl/certs/51391683.0"
	I0925 04:28:20.992647    9153 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0925 04:28:20.995387    9153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0925 04:28:20.999026    9153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0925 04:28:21.002636    9153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0925 04:28:21.006262    9153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0925 04:28:21.009856    9153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0925 04:28:21.013384    9153 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
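Each control-plane certificate is verified with `openssl x509 -checkend 86400`, i.e. "will this cert still be valid 24 hours from now". The same check expressed with Go's standard library, using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("not a PEM certificate")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// Equivalent of `openssl x509 -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid beyond 24h")
	}
}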
	I0925 04:28:21.017064    9153 kubeadm.go:404] StartCluster: {Name:embed-certs-952000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.2 ClusterName:embed-certs-952000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress:
Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 04:28:21.017154    9153 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 04:28:21.029930    9153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0925 04:28:21.035915    9153 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0925 04:28:21.035934    9153 kubeadm.go:636] restartCluster start
	I0925 04:28:21.035977    9153 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0925 04:28:21.041664    9153 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:21.042047    9153 kubeconfig.go:135] verify returned: extract IP: "embed-certs-952000" does not appear in /Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 04:28:21.042196    9153 kubeconfig.go:146] "embed-certs-952000" context is missing from /Users/jenkins/minikube-integration/17297-1019/kubeconfig - will repair!
	I0925 04:28:21.042484    9153 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1019/kubeconfig: {Name:mk089a453556df7022ab2ad95444bff17ceaaa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:28:21.043948    9153 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0925 04:28:21.049741    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:21.049778    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:21.057386    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:21.057394    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:21.057433    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:21.064853    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:21.565884    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:21.565988    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:21.574986    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:22.065798    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:22.065903    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:22.074662    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:22.566187    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:22.566348    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:22.575837    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:23.066771    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:23.066932    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:23.076688    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:23.564988    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:23.565121    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:23.573430    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:24.065746    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:24.065870    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:24.075582    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:24.565576    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:24.565681    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:24.575342    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:25.065365    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:25.065469    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:25.075128    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:25.565010    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:25.565213    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:25.575533    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:26.065982    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:26.066126    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:26.075336    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:26.565750    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:26.565836    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:26.574556    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:27.066602    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:27.066715    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:27.076288    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:27.565275    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:27.565378    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:27.574829    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:28.065737    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:28.065938    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:28.075275    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:28.565821    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:28.565932    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:28.575484    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:29.065566    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:29.065745    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:29.074580    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:29.565800    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:29.565985    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:29.575101    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:30.065501    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:30.065692    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:30.074472    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:30.565651    9153 api_server.go:166] Checking apiserver status ...
	I0925 04:28:30.565738    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0925 04:28:30.573712    9153 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0925 04:28:31.051067    9153 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
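The ~500ms "Checking apiserver status" entries above end with "apiserver error: context deadline exceeded": the restart check is a poll bounded by a context deadline, after which minikube concludes the cluster needs reconfiguring. A minimal sketch of that pattern (hypothetical, not minikube's code; it reuses the pgrep expression from the log):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			// Matches "needs reconfigure: apiserver error: context deadline exceeded".
			fmt.Println("apiserver never came up:", ctx.Err())
			return
		case <-ticker.C:
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("apiserver process found")
				return
			}
		}
	}
}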
	I0925 04:28:31.051101    9153 kubeadm.go:1128] stopping kube-system containers ...
	I0925 04:28:31.051227    9153 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0925 04:28:31.067362    9153 docker.go:463] Stopping containers: [7fa4a0ebb174 9dce62db1db6 101e52f35f8a 8ed83d102101 b134bc054a85 4ad2c63ceb14 ef924609f9a1 ce6773b07b99 596a9b0cbe08 e63740b87413 c6384edd5158 3bbe6f084b06 a74d7783cda3 631ce28f6d7a]
	I0925 04:28:31.067439    9153 ssh_runner.go:195] Run: docker stop 7fa4a0ebb174 9dce62db1db6 101e52f35f8a 8ed83d102101 b134bc054a85 4ad2c63ceb14 ef924609f9a1 ce6773b07b99 596a9b0cbe08 e63740b87413 c6384edd5158 3bbe6f084b06 a74d7783cda3 631ce28f6d7a
	I0925 04:28:31.080988    9153 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0925 04:28:31.091155    9153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0925 04:28:31.097073    9153 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0925 04:28:31.097123    9153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0925 04:28:31.102779    9153 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0925 04:28:31.102787    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:28:31.177091    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:28:31.684172    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:28:31.820790    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:28:31.875763    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:28:31.924522    9153 api_server.go:52] waiting for apiserver process to appear ...
	I0925 04:28:31.924598    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:28:31.933995    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:28:32.444866    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:28:32.945351    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:28:31.752907    8765 system_pods.go:86] 8 kube-system pods found
	I0925 04:28:31.752921    8765 system_pods.go:89] "coredns-5644d7b6d9-fhm6m" [261b0b1b-bbe6-420e-af7f-6e7cab5f7c35] Running
	I0925 04:28:31.752926    8765 system_pods.go:89] "etcd-old-k8s-version-596000" [a4089b4d-bb12-4bdf-800a-0aac385df57a] Pending
	I0925 04:28:31.752929    8765 system_pods.go:89] "kube-apiserver-old-k8s-version-596000" [66c79610-9a29-4e3c-838d-035e86a4089c] Pending
	I0925 04:28:31.752933    8765 system_pods.go:89] "kube-controller-manager-old-k8s-version-596000" [26ce3f96-6152-4480-a78a-315a4dbbc8ed] Pending
	I0925 04:28:31.752936    8765 system_pods.go:89] "kube-proxy-9jcsn" [1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7] Running
	I0925 04:28:31.752940    8765 system_pods.go:89] "kube-scheduler-old-k8s-version-596000" [29492770-13f3-480b-b887-d880e8df61a6] Running
	I0925 04:28:31.752945    8765 system_pods.go:89] "metrics-server-74d5856cc6-8mcq6" [e3f5ea81-2285-4c21-9779-49cd559184dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 04:28:31.752950    8765 system_pods.go:89] "storage-provisioner" [5b3475c5-65f3-44a9-8561-bf6c52a49a9e] Running
	I0925 04:28:31.752961    8765 retry.go:31] will retry after 10.620928088s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0925 04:28:33.445228    9153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 04:28:33.456910    9153 api_server.go:72] duration metric: took 1.532384418s to wait for apiserver process to appear ...
	I0925 04:28:33.456923    9153 api_server.go:88] waiting for apiserver healthz status ...
	I0925 04:28:33.456938    9153 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0925 04:28:36.011164    9153 api_server.go:279] https://192.168.64.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 04:28:36.011181    9153 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 04:28:36.011189    9153 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0925 04:28:36.027004    9153 api_server.go:279] https://192.168.64.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0925 04:28:36.027019    9153 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0925 04:28:36.529004    9153 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0925 04:28:36.534086    9153 api_server.go:279] https://192.168.64.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 04:28:36.534106    9153 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 04:28:37.027978    9153 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0925 04:28:37.032811    9153 api_server.go:279] https://192.168.64.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0925 04:28:37.032830    9153 api_server.go:103] status: https://192.168.64.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0925 04:28:37.527907    9153 api_server.go:253] Checking apiserver healthz at https://192.168.64.42:8443/healthz ...
	I0925 04:28:37.531261    9153 api_server.go:279] https://192.168.64.42:8443/healthz returned 200:
	ok
	I0925 04:28:37.536735    9153 api_server.go:141] control plane version: v1.28.2
	I0925 04:28:37.536748    9153 api_server.go:131] duration metric: took 4.079806587s to wait for apiserver health ...
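The /healthz probe above is unauthenticated, which explains the sequence of responses: first 403 from the anonymous-user RBAC check, then 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) finish, and finally 200. A minimal sketch of such a probe; skipping TLS verification here is a simplification for an anonymous check and is not how minikube authenticates to the apiserver:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe: no client cert, and the apiserver's
			// self-signed chain is not verified in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.64.42:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}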
	I0925 04:28:37.536756    9153 cni.go:84] Creating CNI manager for ""
	I0925 04:28:37.536782    9153 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 04:28:37.575559    9153 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0925 04:28:37.611269    9153 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0925 04:28:37.627745    9153 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0925 04:28:37.662310    9153 system_pods.go:43] waiting for kube-system pods to appear ...
	I0925 04:28:37.668618    9153 system_pods.go:59] 8 kube-system pods found
	I0925 04:28:37.668636    9153 system_pods.go:61] "coredns-5dd5756b68-vbhcz" [973bbf6e-65be-4aeb-b0fc-47e2ce9d3cd0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0925 04:28:37.668649    9153 system_pods.go:61] "etcd-embed-certs-952000" [dde02afb-8926-4fbf-911d-880d65f6e30f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0925 04:28:37.668658    9153 system_pods.go:61] "kube-apiserver-embed-certs-952000" [05d97e32-7bbe-49da-8572-a73bca294976] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0925 04:28:37.668664    9153 system_pods.go:61] "kube-controller-manager-embed-certs-952000" [fa55dc33-cb9c-4276-9948-d4528d774a11] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0925 04:28:37.668669    9153 system_pods.go:61] "kube-proxy-jl6vp" [4c9c4fbd-f6a6-42d4-b579-f62284c71415] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0925 04:28:37.668674    9153 system_pods.go:61] "kube-scheduler-embed-certs-952000" [e13497f3-60b5-4574-9260-0546e88aa2c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0925 04:28:37.668680    9153 system_pods.go:61] "metrics-server-57f55c9bc5-jw9fg" [3ca2111d-356b-4d2b-9438-df139140faec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 04:28:37.668686    9153 system_pods.go:61] "storage-provisioner" [bd610097-3842-4dbf-98a5-65f64b4b4c22] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0925 04:28:37.668692    9153 system_pods.go:74] duration metric: took 6.371095ms to wait for pod list to return data ...
	I0925 04:28:37.668706    9153 node_conditions.go:102] verifying NodePressure condition ...
	I0925 04:28:37.670987    9153 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 04:28:37.671002    9153 node_conditions.go:123] node cpu capacity is 2
	I0925 04:28:37.671013    9153 node_conditions.go:105] duration metric: took 2.304437ms to run NodePressure ...
	I0925 04:28:37.671028    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0925 04:28:37.910944    9153 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0925 04:28:37.915755    9153 kubeadm.go:787] kubelet initialised
	I0925 04:28:37.915768    9153 kubeadm.go:788] duration metric: took 4.810072ms waiting for restarted kubelet to initialise ...
	I0925 04:28:37.915775    9153 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:28:37.920994    9153 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:37.924777    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:37.924790    9153 pod_ready.go:81] duration metric: took 3.782305ms waiting for pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:37.924796    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:37.924802    9153 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:37.931804    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "etcd-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:37.931819    9153 pod_ready.go:81] duration metric: took 7.011197ms waiting for pod "etcd-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:37.931827    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "etcd-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:37.931833    9153 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:37.940486    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:37.940500    9153 pod_ready.go:81] duration metric: took 8.662274ms waiting for pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:37.940510    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:37.940529    9153 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:38.065407    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:38.065422    9153 pod_ready.go:81] duration metric: took 124.885749ms waiting for pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:38.065428    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:38.065434    9153 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jl6vp" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:38.466660    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "kube-proxy-jl6vp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:38.466677    9153 pod_ready.go:81] duration metric: took 401.236799ms waiting for pod "kube-proxy-jl6vp" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:38.466686    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "kube-proxy-jl6vp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:38.466691    9153 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:38.864591    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:38.864606    9153 pod_ready.go:81] duration metric: took 397.90568ms waiting for pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:38.864612    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:38.864617    9153 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-jw9fg" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:39.265789    9153 pod_ready.go:97] node "embed-certs-952000" hosting pod "metrics-server-57f55c9bc5-jw9fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:39.265808    9153 pod_ready.go:81] duration metric: took 401.183287ms waiting for pod "metrics-server-57f55c9bc5-jw9fg" in "kube-system" namespace to be "Ready" ...
	E0925 04:28:39.265817    9153 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-952000" hosting pod "metrics-server-57f55c9bc5-jw9fg" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-952000" has status "Ready":"False"
	I0925 04:28:39.265827    9153 pod_ready.go:38] duration metric: took 1.350039768s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
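Every per-pod wait above is short-circuited with "(skipping!)" because the node itself still reports Ready=False right after the restart, so pod readiness cannot be meaningful yet. A sketch of inspecting that node condition with client-go (standard client-go calls; the kubeconfig path and node name are taken from the log, and the program needs a module with k8s.io/client-go as a dependency):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/17297-1019/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-952000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// While this prints "False", the pod_ready waits above are skipped.
			fmt.Println("node Ready condition:", c.Status)
		}
	}
}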
	I0925 04:28:39.265840    9153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0925 04:28:39.273913    9153 ops.go:34] apiserver oom_adj: -16
	I0925 04:28:39.273926    9153 kubeadm.go:640] restartCluster took 18.23792422s
	I0925 04:28:39.273930    9153 kubeadm.go:406] StartCluster complete in 18.256814449s
	I0925 04:28:39.273940    9153 settings.go:142] acquiring lock: {Name:mk37b0392249d6bd036812bb5e31347cdeef3bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:28:39.274020    9153 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 04:28:39.274775    9153 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17297-1019/kubeconfig: {Name:mk089a453556df7022ab2ad95444bff17ceaaa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0925 04:28:39.275040    9153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0925 04:28:39.275078    9153 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0925 04:28:39.275125    9153 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-952000"
	I0925 04:28:39.275135    9153 addons.go:69] Setting default-storageclass=true in profile "embed-certs-952000"
	I0925 04:28:39.275138    9153 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-952000"
	I0925 04:28:39.275143    9153 addons.go:69] Setting metrics-server=true in profile "embed-certs-952000"
	W0925 04:28:39.275149    9153 addons.go:240] addon storage-provisioner should already be in state true
	I0925 04:28:39.275152    9153 addons.go:69] Setting dashboard=true in profile "embed-certs-952000"
	I0925 04:28:39.275159    9153 addons.go:231] Setting addon metrics-server=true in "embed-certs-952000"
	I0925 04:28:39.275161    9153 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-952000"
	W0925 04:28:39.275172    9153 addons.go:240] addon metrics-server should already be in state true
	I0925 04:28:39.275175    9153 config.go:182] Loaded profile config "embed-certs-952000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 04:28:39.275176    9153 addons.go:231] Setting addon dashboard=true in "embed-certs-952000"
	W0925 04:28:39.275187    9153 addons.go:240] addon dashboard should already be in state true
	I0925 04:28:39.275195    9153 host.go:66] Checking if "embed-certs-952000" exists ...
	I0925 04:28:39.275209    9153 host.go:66] Checking if "embed-certs-952000" exists ...
	I0925 04:28:39.275217    9153 host.go:66] Checking if "embed-certs-952000" exists ...
	I0925 04:28:39.275473    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.275476    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.275474    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.275503    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.275509    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.275511    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.275558    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.275572    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.284060    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56490
	I0925 04:28:39.284478    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.285070    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.285092    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.285102    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56492
	I0925 04:28:39.285385    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.285613    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.285812    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.285849    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.286782    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.286782    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56494
	I0925 04:28:39.286876    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.287883    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.288008    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.288012    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56496
	I0925 04:28:39.288450    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.288467    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.288492    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.288503    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.288528    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.289423    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.289537    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.289830    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.290799    9153 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-952000" context rescaled to 1 replicas
	I0925 04:28:39.290822    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.290827    9153 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.64.42 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0925 04:28:39.312481    9153 out.go:177] * Verifying Kubernetes components...
	I0925 04:28:39.290870    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.290969    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetState
	I0925 04:28:39.294399    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56498
	I0925 04:28:39.354155    9153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:28:39.296021    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56499
	I0925 04:28:39.312536    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.354327    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:39.354426    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9164
	I0925 04:28:39.355393    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.355395    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.356739    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.356760    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.356761    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.356780    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.357012    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.357030    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.357149    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetState
	I0925 04:28:39.357153    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetState
	I0925 04:28:39.357262    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:39.357282    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:39.357328    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9164
	I0925 04:28:39.357338    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9164
	I0925 04:28:39.358550    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:39.358555    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:39.380089    9153 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0925 04:28:39.361953    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56502
	I0925 04:28:39.362580    9153 addons.go:231] Setting addon default-storageclass=true in "embed-certs-952000"
	I0925 04:28:39.401235    9153 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0925 04:28:39.401242    9153 addons.go:240] addon default-storageclass should already be in state true
	I0925 04:28:39.401261    9153 host.go:66] Checking if "embed-certs-952000" exists ...
	I0925 04:28:39.401277    9153 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0925 04:28:39.422079    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0925 04:28:39.422091    9153 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 04:28:39.422099    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0925 04:28:39.422101    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:39.401665    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.422114    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:39.422259    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:39.422259    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:39.422367    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.422384    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.422387    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:39.422397    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:39.422494    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:39.422510    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:39.423155    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:39.423162    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:39.423163    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.423382    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.423975    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.424290    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetState
	I0925 04:28:39.424515    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:39.424600    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9164
	I0925 04:28:39.425628    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:39.447038    9153 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0925 04:28:39.429768    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56506
	I0925 04:28:39.436011    9153 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0925 04:28:39.436026    9153 node_ready.go:35] waiting up to 6m0s for node "embed-certs-952000" to be "Ready" ...
	I0925 04:28:39.489154    9153 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0925 04:28:39.447445    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.477266    9153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0925 04:28:39.488498    9153 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0925 04:28:39.509998    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0925 04:28:39.510073    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0925 04:28:39.510088    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0925 04:28:39.510108    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:39.510316    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:39.510480    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:39.510552    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.510570    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.510637    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:39.510792    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:39.510843    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.511327    9153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 04:28:39.511355    9153 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 04:28:39.518770    9153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56509
	I0925 04:28:39.519115    9153 main.go:141] libmachine: () Calling .GetVersion
	I0925 04:28:39.519463    9153 main.go:141] libmachine: Using API Version  1
	I0925 04:28:39.519474    9153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 04:28:39.519681    9153 main.go:141] libmachine: () Calling .GetMachineName
	I0925 04:28:39.519780    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetState
	I0925 04:28:39.519857    9153 main.go:141] libmachine: (embed-certs-952000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 04:28:39.519937    9153 main.go:141] libmachine: (embed-certs-952000) DBG | hyperkit pid from json: 9164
	I0925 04:28:39.520929    9153 main.go:141] libmachine: (embed-certs-952000) Calling .DriverName
	I0925 04:28:39.521090    9153 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0925 04:28:39.521098    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0925 04:28:39.521107    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHHostname
	I0925 04:28:39.521198    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHPort
	I0925 04:28:39.521275    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHKeyPath
	I0925 04:28:39.521359    9153 main.go:141] libmachine: (embed-certs-952000) Calling .GetSSHUsername
	I0925 04:28:39.521437    9153 sshutil.go:53] new ssh client: &{IP:192.168.64.42 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/embed-certs-952000/id_rsa Username:docker}
	I0925 04:28:39.523980    9153 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0925 04:28:39.523990    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0925 04:28:39.550278    9153 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 04:28:39.550290    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0925 04:28:39.580430    9153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0925 04:28:39.605117    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0925 04:28:39.605131    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0925 04:28:39.611083    9153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0925 04:28:39.706953    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0925 04:28:39.706969    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0925 04:28:39.757871    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0925 04:28:39.757884    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0925 04:28:39.771737    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0925 04:28:39.771749    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0925 04:28:39.802397    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0925 04:28:39.802410    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0925 04:28:39.843535    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0925 04:28:39.843548    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0925 04:28:39.872163    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0925 04:28:39.872175    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0925 04:28:39.886883    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0925 04:28:39.886894    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0925 04:28:39.904503    9153 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 04:28:39.904516    9153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0925 04:28:39.921894    9153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0925 04:28:40.678407    9153 node_ready.go:49] node "embed-certs-952000" has status "Ready":"True"
	I0925 04:28:40.678422    9153 node_ready.go:38] duration metric: took 1.210388259s waiting for node "embed-certs-952000" to be "Ready" ...
	I0925 04:28:40.678428    9153 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0925 04:28:40.682158    9153 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:40.833307    9153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.323268343s)
	I0925 04:28:40.833343    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833355    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833369    9153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.252913151s)
	I0925 04:28:40.833398    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833397    9153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.222292287s)
	I0925 04:28:40.833409    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833444    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833461    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833610    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.833628    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.833640    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.833645    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.833647    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:40.833651    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.833653    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.833655    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:40.833660    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:40.833668    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833675    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833658    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833686    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833695    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833685    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833847    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.833847    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.833860    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:40.833876    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:40.833881    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.833883    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:40.833912    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.833922    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:40.833929    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.833962    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.833979    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:40.833990    9153 addons.go:467] Verifying addon metrics-server=true in "embed-certs-952000"
	I0925 04:28:40.834062    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:40.834109    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:40.834124    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:41.118030    9153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.196091364s)
	I0925 04:28:41.118062    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:41.118070    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:41.118223    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:41.118233    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:41.118241    9153 main.go:141] libmachine: Making call to close driver server
	I0925 04:28:41.118249    9153 main.go:141] libmachine: (embed-certs-952000) Calling .Close
	I0925 04:28:41.118251    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:41.118386    9153 main.go:141] libmachine: (embed-certs-952000) DBG | Closing plugin on server side
	I0925 04:28:41.118406    9153 main.go:141] libmachine: Successfully made call to close driver server
	I0925 04:28:41.118418    9153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0925 04:28:41.139799    9153 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-952000 addons enable metrics-server	
	
	
	I0925 04:28:41.197436    9153 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0925 04:28:42.378767    8765 system_pods.go:86] 8 kube-system pods found
	I0925 04:28:42.378780    8765 system_pods.go:89] "coredns-5644d7b6d9-fhm6m" [261b0b1b-bbe6-420e-af7f-6e7cab5f7c35] Running
	I0925 04:28:42.378784    8765 system_pods.go:89] "etcd-old-k8s-version-596000" [a4089b4d-bb12-4bdf-800a-0aac385df57a] Running
	I0925 04:28:42.378788    8765 system_pods.go:89] "kube-apiserver-old-k8s-version-596000" [66c79610-9a29-4e3c-838d-035e86a4089c] Running
	I0925 04:28:42.378804    8765 system_pods.go:89] "kube-controller-manager-old-k8s-version-596000" [26ce3f96-6152-4480-a78a-315a4dbbc8ed] Running
	I0925 04:28:42.378812    8765 system_pods.go:89] "kube-proxy-9jcsn" [1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7] Running
	I0925 04:28:42.378819    8765 system_pods.go:89] "kube-scheduler-old-k8s-version-596000" [29492770-13f3-480b-b887-d880e8df61a6] Running
	I0925 04:28:42.378828    8765 system_pods.go:89] "metrics-server-74d5856cc6-8mcq6" [e3f5ea81-2285-4c21-9779-49cd559184dc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0925 04:28:42.378835    8765 system_pods.go:89] "storage-provisioner" [5b3475c5-65f3-44a9-8561-bf6c52a49a9e] Running
	I0925 04:28:42.378841    8765 system_pods.go:126] duration metric: took 1m4.323877371s to wait for k8s-apps to be running ...
	I0925 04:28:42.378846    8765 system_svc.go:44] waiting for kubelet service to be running ....
	I0925 04:28:42.378899    8765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 04:28:42.388996    8765 system_svc.go:56] duration metric: took 10.145002ms WaitForService to wait for kubelet.
	I0925 04:28:42.389012    8765 kubeadm.go:581] duration metric: took 1m13.793225187s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0925 04:28:42.389030    8765 node_conditions.go:102] verifying NodePressure condition ...
	I0925 04:28:42.391070    8765 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0925 04:28:42.391084    8765 node_conditions.go:123] node cpu capacity is 2
	I0925 04:28:42.391090    8765 node_conditions.go:105] duration metric: took 2.057781ms to run NodePressure ...
	I0925 04:28:42.391098    8765 start.go:228] waiting for startup goroutines ...
	I0925 04:28:42.391103    8765 start.go:233] waiting for cluster config update ...
	I0925 04:28:42.391114    8765 start.go:242] writing updated cluster config ...
	I0925 04:28:42.391429    8765 ssh_runner.go:195] Run: rm -f paused
	I0925 04:28:42.428749    8765 start.go:600] kubectl: 1.27.2, cluster: 1.16.0 (minor skew: 11)
	I0925 04:28:42.452877    8765 out.go:177] 
	W0925 04:28:42.474552    8765 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.16.0.
	I0925 04:28:42.495668    8765 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0925 04:28:42.537530    8765 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-596000" cluster and "default" namespace by default
	I0925 04:28:41.270664    9153 addons.go:502] enable addons completed in 1.995590067s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0925 04:28:43.071499    9153 pod_ready.go:102] pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace has status "Ready":"False"
	I0925 04:28:45.571471    9153 pod_ready.go:102] pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace has status "Ready":"False"
	I0925 04:28:46.072170    9153 pod_ready.go:92] pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace has status "Ready":"True"
	I0925 04:28:46.072183    9153 pod_ready.go:81] duration metric: took 5.389995906s waiting for pod "coredns-5dd5756b68-vbhcz" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:46.072189    9153 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:48.083497    9153 pod_ready.go:102] pod "etcd-embed-certs-952000" in "kube-system" namespace has status "Ready":"False"
	I0925 04:28:48.590037    9153 pod_ready.go:92] pod "etcd-embed-certs-952000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:28:48.590050    9153 pod_ready.go:81] duration metric: took 2.51784815s waiting for pod "etcd-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:48.590058    9153 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:48.608877    9153 pod_ready.go:92] pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:28:48.608890    9153 pod_ready.go:81] duration metric: took 18.826851ms waiting for pod "kube-apiserver-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:48.608897    9153 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:50.071756    9153 pod_ready.go:92] pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:28:50.071767    9153 pod_ready.go:81] duration metric: took 1.462861153s waiting for pod "kube-controller-manager-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:50.071774    9153 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jl6vp" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:50.269695    9153 pod_ready.go:92] pod "kube-proxy-jl6vp" in "kube-system" namespace has status "Ready":"True"
	I0925 04:28:50.269706    9153 pod_ready.go:81] duration metric: took 197.927657ms waiting for pod "kube-proxy-jl6vp" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:50.269712    9153 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:50.665958    9153 pod_ready.go:92] pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace has status "Ready":"True"
	I0925 04:28:50.665970    9153 pod_ready.go:81] duration metric: took 396.251712ms waiting for pod "kube-scheduler-embed-certs-952000" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:50.665976    9153 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-jw9fg" in "kube-system" namespace to be "Ready" ...
	I0925 04:28:52.970964    9153 pod_ready.go:102] pod "metrics-server-57f55c9bc5-jw9fg" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-25 11:20:54 UTC, ends at Mon 2023-09-25 11:28:53 UTC. --
	Sep 25 11:27:44 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:27:44.043401928Z" level=info msg="shim disconnected" id=3933a1cd85ca25d57f92c33ac343bfe9c9bd3c0dd20df263ca7745a95d7c38ec namespace=moby
	Sep 25 11:27:44 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:27:44.044250167Z" level=warning msg="cleaning up after shim disconnected" id=3933a1cd85ca25d57f92c33ac343bfe9c9bd3c0dd20df263ca7745a95d7c38ec namespace=moby
	Sep 25 11:27:44 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:27:44.044353196Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:27:46 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:27:46.932483736Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:27:46 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:27:46.933184372Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:27:46 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:27:46.934327119Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.098955232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.099019414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.099403836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.099418566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:28:03.378479615Z" level=info msg="ignoring event" container=0be11ea9d18c37aad81694d7ec43ea642ccf67728782aefb6c62afed0e6c4d3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.378947986Z" level=info msg="shim disconnected" id=0be11ea9d18c37aad81694d7ec43ea642ccf67728782aefb6c62afed0e6c4d3e namespace=moby
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.379048385Z" level=warning msg="cleaning up after shim disconnected" id=0be11ea9d18c37aad81694d7ec43ea642ccf67728782aefb6c62afed0e6c4d3e namespace=moby
	Sep 25 11:28:03 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:03.379080647Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 25 11:28:11 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:28:11.930178721Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:28:11 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:28:11.930217529Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:28:11 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:28:11.931687889Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:28:34 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:34.969602319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 25 11:28:34 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:34.969900812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:28:34 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:34.970029503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 25 11:28:34 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:34.970091935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 25 11:28:35 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:35.244480454Z" level=info msg="shim disconnected" id=4a1b756bd3e299be0966589705dcd15a0f4a9e4bc9159ce48ff1790fd8eafec0 namespace=moby
	Sep 25 11:28:35 old-k8s-version-596000 dockerd[1163]: time="2023-09-25T11:28:35.244572405Z" level=info msg="ignoring event" container=4a1b756bd3e299be0966589705dcd15a0f4a9e4bc9159ce48ff1790fd8eafec0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 25 11:28:35 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:35.244780783Z" level=warning msg="cleaning up after shim disconnected" id=4a1b756bd3e299be0966589705dcd15a0f4a9e4bc9159ce48ff1790fd8eafec0 namespace=moby
	Sep 25 11:28:35 old-k8s-version-596000 dockerd[1169]: time="2023-09-25T11:28:35.244825871Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	4a1b756bd3e2   a90209bb39e3             "nginx -g 'daemon of…"   19 seconds ago       Exited (1) 18 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard_959ea4ad-dd53-4edd-9b0c-3dc1f22fde25_3
	11e6f66d7d1e   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-h2hm9_kubernetes-dashboard_47e68605-826a-4bf6-8f58-e49bb6b880fd_0
	10ec9d0f85bf   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard_959ea4ad-dd53-4edd-9b0c-3dc1f22fde25_0
	1affbc9072d9   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-h2hm9_kubernetes-dashboard_47e68605-826a-4bf6-8f58-e49bb6b880fd_0
	a0c8d5d59637   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-8mcq6_kube-system_e3f5ea81-2285-4c21-9779-49cd559184dc_0
	1826970338f8   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_5b3475c5-65f3-44a9-8561-bf6c52a49a9e_0
	56b89dea6f68   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-fhm6m_kube-system_261b0b1b-bbe6-420e-af7f-6e7cab5f7c35_0
	e368cc3e8a0e   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_5b3475c5-65f3-44a9-8561-bf6c52a49a9e_0
	9918c73a325c   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-fhm6m_kube-system_261b0b1b-bbe6-420e-af7f-6e7cab5f7c35_0
	9b72b8dd9942   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-9jcsn_kube-system_1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7_0
	4276f9894a19   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-9jcsn_kube-system_1329f3cf-e0f3-43b7-8d1e-23961ea3ffe7_0
	8ef8d62c0a2f   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-596000_kube-system_a7a4ab991673adbe74e24224e670c8b4_0
	a829220abd9f   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-596000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	ab36bd02c56d   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-596000_kube-system_7ab33bae85b14572b4945e1e94beb673_0
	4ee8f9066e9e   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-596000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	fddcbbcbe3b9   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-596000_kube-system_a7a4ab991673adbe74e24224e670c8b4_0
	b09be620b570   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-596000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	118c8922ee29   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-596000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	324a7648f364   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-596000_kube-system_7ab33bae85b14572b4945e1e94beb673_0
	time="2023-09-25T11:28:53Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [56b89dea6f68] <==
	* .:53
	2023-09-25T11:27:30.243Z [INFO] plugin/reload: Running configuration MD5 = 46cbc15810136842e5653e578aaacfef
	2023-09-25T11:27:30.243Z [INFO] CoreDNS-1.6.2
	2023-09-25T11:27:30.243Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-25T11:27:30.248Z [INFO] 127.0.0.1:59820 - 3726 "HINFO IN 1083097832645627984.6742915448314799892. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004226111s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-596000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-596000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1bf6c3d5317028f348e55ea19d261973a6487d3c
	                    minikube.k8s.io/name=old-k8s-version-596000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_25T04_27_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 25 Sep 2023 11:27:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 25 Sep 2023 11:28:07 +0000   Mon, 25 Sep 2023 11:27:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 25 Sep 2023 11:28:07 +0000   Mon, 25 Sep 2023 11:27:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 25 Sep 2023 11:28:07 +0000   Mon, 25 Sep 2023 11:27:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 25 Sep 2023 11:28:07 +0000   Mon, 25 Sep 2023 11:27:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.40
	  Hostname:    old-k8s-version-596000
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	System Info:
	 Machine ID:                 216ad66d17e04de29aa366b932b26216
	 System UUID:                34f011ee-0000-0000-88a8-149d997fca88
	 Boot ID:                    fecdf2a5-fae2-47ab-81be-3584e62b17a1
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-fhm6m                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                etcd-old-k8s-version-596000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                kube-apiserver-old-k8s-version-596000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                kube-controller-manager-old-k8s-version-596000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                kube-proxy-9jcsn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                kube-scheduler-old-k8s-version-596000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                metrics-server-74d5856cc6-8mcq6                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         84s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-lfvc8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-h2hm9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  Starting                 112s                 kubelet, old-k8s-version-596000     Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet, old-k8s-version-596000     Node old-k8s-version-596000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet, old-k8s-version-596000     Node old-k8s-version-596000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet, old-k8s-version-596000     Node old-k8s-version-596000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet, old-k8s-version-596000     Updated Node Allocatable limit across pods
	  Normal  Starting                 85s                  kube-proxy, old-k8s-version-596000  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.028784] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +4.972545] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007163] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.195763] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.039969] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.852062] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep25 11:21] systemd-fstab-generator[519]: Ignoring "noauto" for root device
	[  +0.088218] systemd-fstab-generator[530]: Ignoring "noauto" for root device
	[  +0.758521] systemd-fstab-generator[783]: Ignoring "noauto" for root device
	[  +0.218437] systemd-fstab-generator[820]: Ignoring "noauto" for root device
	[  +0.087554] systemd-fstab-generator[831]: Ignoring "noauto" for root device
	[  +0.122881] systemd-fstab-generator[859]: Ignoring "noauto" for root device
	[  +6.068803] systemd-fstab-generator[1154]: Ignoring "noauto" for root device
	[  +1.716451] kauditd_printk_skb: 67 callbacks suppressed
	[ +14.130874] systemd-fstab-generator[1610]: Ignoring "noauto" for root device
	[ +20.571665] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.080099] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep25 11:22] kauditd_printk_skb: 5 callbacks suppressed
	[Sep25 11:26] systemd-fstab-generator[6873]: Ignoring "noauto" for root device
	[Sep25 11:27] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +0.043629] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [8ef8d62c0a2f] <==
	* 2023-09-25 11:27:04.746531 I | etcdserver: starting member 8700b5f30fd8925d in cluster 39244c9d1c1d508b
	2023-09-25 11:27:04.763441 I | raft: 8700b5f30fd8925d became follower at term 0
	2023-09-25 11:27:04.767668 I | raft: newRaft 8700b5f30fd8925d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-09-25 11:27:04.770511 I | raft: 8700b5f30fd8925d became follower at term 1
	2023-09-25 11:27:04.816261 W | auth: simple token is not cryptographically signed
	2023-09-25 11:27:04.817817 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-09-25 11:27:04.821822 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-25 11:27:04.822082 I | embed: listening for metrics on http://192.168.64.40:2381
	2023-09-25 11:27:04.825106 I | etcdserver: 8700b5f30fd8925d as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-25 11:27:04.825854 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-25 11:27:04.825941 I | etcdserver/membership: added member 8700b5f30fd8925d [https://192.168.64.40:2380] to cluster 39244c9d1c1d508b
	2023-09-25 11:27:05.393162 I | raft: 8700b5f30fd8925d is starting a new election at term 1
	2023-09-25 11:27:05.393205 I | raft: 8700b5f30fd8925d became candidate at term 2
	2023-09-25 11:27:05.393214 I | raft: 8700b5f30fd8925d received MsgVoteResp from 8700b5f30fd8925d at term 2
	2023-09-25 11:27:05.393221 I | raft: 8700b5f30fd8925d became leader at term 2
	2023-09-25 11:27:05.393225 I | raft: raft.node: 8700b5f30fd8925d elected leader 8700b5f30fd8925d at term 2
	2023-09-25 11:27:05.393624 I | embed: ready to serve client requests
	2023-09-25 11:27:05.393676 I | etcdserver: published {Name:old-k8s-version-596000 ClientURLs:[https://192.168.64.40:2379]} to cluster 39244c9d1c1d508b
	2023-09-25 11:27:05.393865 I | embed: ready to serve client requests
	2023-09-25 11:27:05.394714 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-25 11:27:05.394848 I | embed: serving client requests on 192.168.64.40:2379
	2023-09-25 11:27:05.394882 I | etcdserver: setting up the initial cluster version to 3.3
	2023-09-25 11:27:05.410147 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-25 11:27:05.410240 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-25 11:27:28.819487 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-9bzj5\" " with result "range_response_count:1 size:1447" took too long (113.687882ms) to execute
	
	* 
	* ==> kernel <==
	*  11:28:54 up 8 min,  0 users,  load average: 0.31, 0.33, 0.17
	Linux old-k8s-version-596000 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ab36bd02c56d] <==
	* I0925 11:27:08.664549       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0925 11:27:08.664589       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0925 11:27:08.676393       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0925 11:27:08.680742       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0925 11:27:08.680799       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0925 11:27:10.445653       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0925 11:27:10.725680       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0925 11:27:11.015270       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.64.40]
	I0925 11:27:11.015864       1 controller.go:606] quota admission added evaluator for: endpoints
	I0925 11:27:11.944718       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0925 11:27:12.437251       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0925 11:27:12.739685       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0925 11:27:28.446369       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0925 11:27:28.470281       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0925 11:27:28.895467       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0925 11:27:31.332214       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:27:31.332257       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:27:31.332329       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:27:31.332336       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0925 11:28:31.337350       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0925 11:28:31.337539       1 handler_proxy.go:99] no RequestInfo found in the context
	E0925 11:28:31.337639       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0925 11:28:31.337660       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [4ee8f9066e9e] <==
	* I0925 11:27:29.022533       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0925 11:27:29.765009       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"e15f2e60-d093-47c5-93ca-f6d0b6dc3f38", APIVersion:"apps/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-74d5856cc6 to 1
	I0925 11:27:29.772928       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"16720cae-50e8-4edf-b9ff-1c782c9ff24d", APIVersion:"apps/v1", ResourceVersion:"388", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "metrics-server-74d5856cc6-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0925 11:27:29.883475       1 replica_set.go:450] Sync "kube-system/metrics-server-74d5856cc6" failed with pods "metrics-server-74d5856cc6-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0925 11:27:29.919930       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0925 11:27:30.030192       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0925 11:27:30.320503       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"8e5dd157-2a0c-4319-850e-ffffc78f74da", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-d6b4b5544 to 1
	I0925 11:27:30.350215       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"ba532e2d-6080-41fa-a567-edd5d2d630d6", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0925 11:27:30.357601       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"3f78dc83-d98c-4fef-a2c3-7b8ecf77c53a", APIVersion:"apps/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	E0925 11:27:30.357749       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0925 11:27:30.362613       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0925 11:27:30.362727       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"ba532e2d-6080-41fa-a567-edd5d2d630d6", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0925 11:27:30.366659       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"6a46b785-9156-4843-beec-90b7d18a0c24", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0925 11:27:30.373572       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0925 11:27:30.388337       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0925 11:27:30.388756       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"ba532e2d-6080-41fa-a567-edd5d2d630d6", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0925 11:27:30.400146       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0925 11:27:30.400602       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"6a46b785-9156-4843-beec-90b7d18a0c24", APIVersion:"apps/v1", ResourceVersion:"438", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0925 11:27:30.888880       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"16720cae-50e8-4edf-b9ff-1c782c9ff24d", APIVersion:"apps/v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-8mcq6
	I0925 11:27:31.407286       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"6a46b785-9156-4843-beec-90b7d18a0c24", APIVersion:"apps/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-h2hm9
	I0925 11:27:31.419013       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"ba532e2d-6080-41fa-a567-edd5d2d630d6", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-lfvc8
	E0925 11:27:59.176924       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:28:02.033344       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0925 11:28:29.432783       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0925 11:28:34.035211       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [9b72b8dd9942] <==
	* W0925 11:27:29.701993       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0925 11:27:29.738193       1 node.go:135] Successfully retrieved node IP: 192.168.64.40
	I0925 11:27:29.738214       1 server_others.go:149] Using iptables Proxier.
	I0925 11:27:29.747291       1 server.go:529] Version: v1.16.0
	I0925 11:27:29.775237       1 config.go:313] Starting service config controller
	I0925 11:27:29.775503       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0925 11:27:29.775521       1 config.go:131] Starting endpoints config controller
	I0925 11:27:29.775587       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0925 11:27:29.875914       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0925 11:27:29.875979       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [a829220abd9f] <==
	* I0925 11:27:07.732498       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0925 11:27:07.733154       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0925 11:27:07.769765       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:27:07.772565       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:27:07.772831       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:27:07.773035       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:27:07.773368       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:27:07.774705       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:27:07.774788       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:27:07.774905       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:27:07.774989       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:27:07.775005       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:27:07.777126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:27:08.773439       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0925 11:27:08.775422       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0925 11:27:08.778396       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0925 11:27:08.780325       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0925 11:27:08.781626       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0925 11:27:08.783439       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0925 11:27:08.784425       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0925 11:27:08.786075       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0925 11:27:08.788778       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0925 11:27:08.792115       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0925 11:27:08.794700       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0925 11:27:28.476294       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-25 11:20:54 UTC, ends at Mon 2023-09-25 11:28:55 UTC. --
	Sep 25 11:27:45 old-k8s-version-596000 kubelet[6879]: E0925 11:27:45.746884    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:27:46 old-k8s-version-596000 kubelet[6879]: E0925 11:27:46.934711    6879 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 25 11:27:46 old-k8s-version-596000 kubelet[6879]: E0925 11:27:46.934812    6879 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 25 11:27:46 old-k8s-version-596000 kubelet[6879]: E0925 11:27:46.934860    6879 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 25 11:27:46 old-k8s-version-596000 kubelet[6879]: E0925 11:27:46.934880    6879 pod_workers.go:191] Error syncing pod e3f5ea81-2285-4c21-9779-49cd559184dc ("metrics-server-74d5856cc6-8mcq6_kube-system(e3f5ea81-2285-4c21-9779-49cd559184dc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:27:48 old-k8s-version-596000 kubelet[6879]: E0925 11:27:48.109367    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:27:59 old-k8s-version-596000 kubelet[6879]: E0925 11:27:59.927920    6879 pod_workers.go:191] Error syncing pod e3f5ea81-2285-4c21-9779-49cd559184dc ("metrics-server-74d5856cc6-8mcq6_kube-system(e3f5ea81-2285-4c21-9779-49cd559184dc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 25 11:28:03 old-k8s-version-596000 kubelet[6879]: W0925 11:28:03.850919    6879 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-lfvc8 through plugin: invalid network status for
	Sep 25 11:28:03 old-k8s-version-596000 kubelet[6879]: E0925 11:28:03.855765    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:28:04 old-k8s-version-596000 kubelet[6879]: W0925 11:28:04.861382    6879 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-lfvc8 through plugin: invalid network status for
	Sep 25 11:28:08 old-k8s-version-596000 kubelet[6879]: E0925 11:28:08.108185    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:28:11 old-k8s-version-596000 kubelet[6879]: E0925 11:28:11.932158    6879 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 25 11:28:11 old-k8s-version-596000 kubelet[6879]: E0925 11:28:11.932495    6879 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 25 11:28:11 old-k8s-version-596000 kubelet[6879]: E0925 11:28:11.932572    6879 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host
	Sep 25 11:28:11 old-k8s-version-596000 kubelet[6879]: E0925 11:28:11.932689    6879 pod_workers.go:191] Error syncing pod e3f5ea81-2285-4c21-9779-49cd559184dc ("metrics-server-74d5856cc6-8mcq6_kube-system(e3f5ea81-2285-4c21-9779-49cd559184dc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.64.1:53: no such host"
	Sep 25 11:28:21 old-k8s-version-596000 kubelet[6879]: E0925 11:28:21.925076    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:28:25 old-k8s-version-596000 kubelet[6879]: E0925 11:28:25.927399    6879 pod_workers.go:191] Error syncing pod e3f5ea81-2285-4c21-9779-49cd559184dc ("metrics-server-74d5856cc6-8mcq6_kube-system(e3f5ea81-2285-4c21-9779-49cd559184dc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 25 11:28:35 old-k8s-version-596000 kubelet[6879]: W0925 11:28:35.031719    6879 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-lfvc8 through plugin: invalid network status for
	Sep 25 11:28:36 old-k8s-version-596000 kubelet[6879]: W0925 11:28:36.240088    6879 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-lfvc8 through plugin: invalid network status for
	Sep 25 11:28:36 old-k8s-version-596000 kubelet[6879]: E0925 11:28:36.244519    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:28:36 old-k8s-version-596000 kubelet[6879]: E0925 11:28:36.929393    6879 pod_workers.go:191] Error syncing pod e3f5ea81-2285-4c21-9779-49cd559184dc ("metrics-server-74d5856cc6-8mcq6_kube-system(e3f5ea81-2285-4c21-9779-49cd559184dc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 25 11:28:37 old-k8s-version-596000 kubelet[6879]: W0925 11:28:37.250813    6879 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-lfvc8 through plugin: invalid network status for
	Sep 25 11:28:38 old-k8s-version-596000 kubelet[6879]: E0925 11:28:38.108342    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:28:48 old-k8s-version-596000 kubelet[6879]: E0925 11:28:48.928190    6879 pod_workers.go:191] Error syncing pod 959ea4ad-dd53-4edd-9b0c-3dc1f22fde25 ("dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-lfvc8_kubernetes-dashboard(959ea4ad-dd53-4edd-9b0c-3dc1f22fde25)"
	Sep 25 11:28:51 old-k8s-version-596000 kubelet[6879]: E0925 11:28:51.927118    6879 pod_workers.go:191] Error syncing pod e3f5ea81-2285-4c21-9779-49cd559184dc ("metrics-server-74d5856cc6-8mcq6_kube-system(e3f5ea81-2285-4c21-9779-49cd559184dc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [11e6f66d7d1e] <==
	* 2023/09/25 11:27:37 Using namespace: kubernetes-dashboard
	2023/09/25 11:27:37 Using in-cluster config to connect to apiserver
	2023/09/25 11:27:37 Using secret token for csrf signing
	2023/09/25 11:27:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/09/25 11:27:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/09/25 11:27:37 Successful initial request to the apiserver, version: v1.16.0
	2023/09/25 11:27:37 Generating JWE encryption key
	2023/09/25 11:27:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/09/25 11:27:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/09/25 11:27:37 Initializing JWE encryption key from synchronized object
	2023/09/25 11:27:37 Creating in-cluster Sidecar client
	2023/09/25 11:27:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:27:37 Serving insecurely on HTTP port: 9090
	2023/09/25 11:28:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:28:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/09/25 11:27:37 Starting overwatch
	
	* 
	* ==> storage-provisioner [1826970338f8] <==
	* I0925 11:27:30.332078       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0925 11:27:30.390477       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0925 11:27:30.390737       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0925 11:27:30.417200       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0925 11:27:30.417629       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-596000_553a453a-6a73-4a2e-b079-e7e2b249b66e!
	I0925 11:27:30.422418       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5aec8f39-d843-4b0e-bee7-3b6a62595cec", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-596000_553a453a-6a73-4a2e-b079-e7e2b249b66e became leader
	I0925 11:27:30.518063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-596000_553a453a-6a73-4a2e-b079-e7e2b249b66e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-596000 -n old-k8s-version-596000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-596000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-8mcq6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-596000 describe pod metrics-server-74d5856cc6-8mcq6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-596000 describe pod metrics-server-74d5856cc6-8mcq6: exit status 1 (50.392251ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-8mcq6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-596000 describe pod metrics-server-74d5856cc6-8mcq6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.69s)
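
Post-mortem note: the helpers_test.go steps above first list every pod whose phase is not Running, then describe each one; in this run the describe step exited with status 1 because "metrics-server-74d5856cc6-8mcq6" was not found (the pod was created in kube-system, see the ReplicaSet event earlier in the log, while describe is run without a namespace and so looks only in default). The following is a minimal sketch of that flow, not the actual helpers_test.go code; it assumes kubectl is on PATH and that the old-k8s-version-596000 context is still available.

	// postmortem_sketch.go: minimal sketch of the post-mortem flow shown above,
	// not the real test helper. Lists pods that are not Running, then describes each.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "old-k8s-version-596000" // kube context used by the failed test

		// Same query as helpers_test.go:261 above: names of all pods whose phase is not Running.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("listing non-running pods failed: %v\n%s", err, out)
			return
		}

		for _, pod := range strings.Fields(string(out)) {
			// Describe each non-running pod. As in the report above, this can exit with
			// status 1 (NotFound) because no namespace is passed to describe.
			desc, derr := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
			fmt.Printf("=== %s ===\n%s", pod, desc)
			if derr != nil {
				fmt.Printf("describe %s failed: %v\n", pod, derr)
			}
		}
	}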

                                                
                                    

Test pass (295/318)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.27
10 TestDownloadOnly/v1.28.2/json-events 7.49
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.32
16 TestDownloadOnly/DeleteAll 0.37
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
19 TestBinaryMirror 1
20 TestOffline 55.98
22 TestAddons/Setup 144.76
24 TestAddons/parallel/Registry 14.75
25 TestAddons/parallel/Ingress 20.48
26 TestAddons/parallel/InspektorGadget 10.5
27 TestAddons/parallel/MetricsServer 5.53
28 TestAddons/parallel/HelmTiller 15.42
30 TestAddons/parallel/CSI 60.41
31 TestAddons/parallel/Headlamp 13.19
32 TestAddons/parallel/CloudSpanner 5.43
35 TestAddons/serial/GCPAuth/Namespaces 0.09
36 TestAddons/StoppedEnableDisable 5.69
37 TestCertOptions 37.85
38 TestCertExpiration 242.21
39 TestDockerFlags 39.97
40 TestForceSystemdFlag 36.67
44 TestHyperKitDriverInstallOrUpdate 6.72
47 TestErrorSpam/setup 33
48 TestErrorSpam/start 1.43
49 TestErrorSpam/status 0.44
50 TestErrorSpam/pause 1.24
51 TestErrorSpam/unpause 1.26
52 TestErrorSpam/stop 3.65
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 50.42
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 39.6
59 TestFunctional/serial/KubeContext 0.03
60 TestFunctional/serial/KubectlGetPods 0.05
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.3
64 TestFunctional/serial/CacheCmd/cache/add_local 1.57
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.16
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.34
69 TestFunctional/serial/CacheCmd/cache/delete 0.13
70 TestFunctional/serial/MinikubeKubectlCmd 0.54
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.73
72 TestFunctional/serial/ExtraConfig 39.31
73 TestFunctional/serial/ComponentHealth 0.05
74 TestFunctional/serial/LogsCmd 2.71
75 TestFunctional/serial/LogsFileCmd 2.67
76 TestFunctional/serial/InvalidService 4.07
78 TestFunctional/parallel/ConfigCmd 0.48
79 TestFunctional/parallel/DashboardCmd 14.73
80 TestFunctional/parallel/DryRun 1.23
81 TestFunctional/parallel/InternationalLanguage 0.72
82 TestFunctional/parallel/StatusCmd 0.47
86 TestFunctional/parallel/ServiceCmdConnect 11.35
87 TestFunctional/parallel/AddonsCmd 0.21
88 TestFunctional/parallel/PersistentVolumeClaim 34.09
90 TestFunctional/parallel/SSHCmd 0.28
91 TestFunctional/parallel/CpCmd 0.57
92 TestFunctional/parallel/MySQL 27.16
93 TestFunctional/parallel/FileSync 0.14
94 TestFunctional/parallel/CertSync 0.8
98 TestFunctional/parallel/NodeLabels 0.05
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
102 TestFunctional/parallel/License 0.48
103 TestFunctional/parallel/Version/short 0.1
104 TestFunctional/parallel/Version/components 0.43
106 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
107 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
108 TestFunctional/parallel/ImageCommands/ImageListTable 0.13
109 TestFunctional/parallel/ImageCommands/ImageListJson 0.14
110 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
111 TestFunctional/parallel/ImageCommands/ImageBuild 2.24
112 TestFunctional/parallel/ImageCommands/Setup 2.35
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.19
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.98
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.91
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.71
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
125 TestFunctional/parallel/DockerEnv/bash 0.64
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.38
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.96
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.12
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
134 TestFunctional/parallel/ProfileCmd/profile_list 0.28
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.24
136 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
137 TestFunctional/parallel/MountCmd/any-port 7.18
138 TestFunctional/parallel/MountCmd/specific-port 1.3
139 TestFunctional/parallel/ServiceCmd/List 1.11
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.8
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.67
143 TestFunctional/parallel/ServiceCmd/Format 0.73
144 TestFunctional/parallel/ServiceCmd/URL 0.44
145 TestFunctional/delete_addon-resizer_images 0.13
146 TestFunctional/delete_my-image_image 0.05
147 TestFunctional/delete_minikube_cached_images 0.05
151 TestImageBuild/serial/Setup 38.45
152 TestImageBuild/serial/NormalBuild 1.32
153 TestImageBuild/serial/BuildWithBuildArg 0.66
154 TestImageBuild/serial/BuildWithDockerIgnore 0.2
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.18
158 TestIngressAddonLegacy/StartLegacyK8sCluster 73.46
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.35
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.24
165 TestJSONOutput/start/Command 58.11
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.46
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.4
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 8.16
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.72
193 TestMainNoArgs 0.06
197 TestMountStart/serial/StartWithMountFirst 16.46
198 TestMountStart/serial/VerifyMountFirst 0.29
199 TestMountStart/serial/StartWithMountSecond 16.27
200 TestMountStart/serial/VerifyMountSecond 0.27
201 TestMountStart/serial/DeleteFirst 2.42
202 TestMountStart/serial/VerifyMountPostDelete 0.26
203 TestMountStart/serial/Stop 2.2
204 TestMountStart/serial/RestartStopped 16.32
205 TestMountStart/serial/VerifyMountPostStop 0.26
208 TestMultiNode/serial/FreshStart2Nodes 97.61
209 TestMultiNode/serial/DeployApp2Nodes 4.23
210 TestMultiNode/serial/PingHostFrom2Pods 0.77
211 TestMultiNode/serial/AddNode 37.18
212 TestMultiNode/serial/ProfileList 0.19
213 TestMultiNode/serial/CopyFile 4.88
214 TestMultiNode/serial/StopNode 2.65
215 TestMultiNode/serial/StartAfterStop 29.19
216 TestMultiNode/serial/RestartKeepsNodes 124.28
217 TestMultiNode/serial/DeleteNode 2.89
218 TestMultiNode/serial/StopMultiNode 16.44
219 TestMultiNode/serial/RestartMultiNode 80.35
220 TestMultiNode/serial/ValidateNameConflict 39.41
224 TestPreload 163.52
226 TestScheduledStopUnix 103.55
227 TestSkaffold 108.86
230 TestRunningBinaryUpgrade 163.11
232 TestKubernetesUpgrade 152.51
245 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.41
246 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.22
247 TestStoppedBinaryUpgrade/Setup 0.77
248 TestStoppedBinaryUpgrade/Upgrade 149.18
250 TestPause/serial/Start 51.34
251 TestStoppedBinaryUpgrade/MinikubeLogs 2.9
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.49
261 TestNoKubernetes/serial/StartWithK8s 37.59
262 TestPause/serial/SecondStartNoReconfiguration 39
263 TestNoKubernetes/serial/StartWithStopK8s 7.49
264 TestNoKubernetes/serial/Start 14.85
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.11
266 TestNoKubernetes/serial/ProfileList 0.47
267 TestNoKubernetes/serial/Stop 2.21
268 TestPause/serial/Pause 0.64
269 TestNoKubernetes/serial/StartNoArgs 15.19
270 TestPause/serial/VerifyStatus 0.16
271 TestPause/serial/Unpause 0.51
272 TestPause/serial/PauseAgain 0.55
273 TestPause/serial/DeletePaused 5.25
274 TestPause/serial/VerifyDeletedResources 4.22
275 TestNetworkPlugins/group/auto/Start 51.89
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.11
277 TestNetworkPlugins/group/flannel/Start 66.28
278 TestNetworkPlugins/group/auto/KubeletFlags 0.13
279 TestNetworkPlugins/group/auto/NetCatPod 11.18
280 TestNetworkPlugins/group/auto/DNS 0.13
281 TestNetworkPlugins/group/auto/Localhost 0.1
282 TestNetworkPlugins/group/auto/HairPin 0.1
283 TestNetworkPlugins/group/flannel/ControllerPod 5.02
284 TestNetworkPlugins/group/flannel/KubeletFlags 0.14
285 TestNetworkPlugins/group/flannel/NetCatPod 12.17
286 TestNetworkPlugins/group/enable-default-cni/Start 87.49
287 TestNetworkPlugins/group/flannel/DNS 0.13
288 TestNetworkPlugins/group/flannel/Localhost 0.11
289 TestNetworkPlugins/group/flannel/HairPin 0.1
290 TestNetworkPlugins/group/kindnet/Start 59.11
291 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.14
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.2
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.14
295 TestNetworkPlugins/group/kindnet/NetCatPod 13.17
296 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
297 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
298 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
299 TestNetworkPlugins/group/kindnet/DNS 0.12
300 TestNetworkPlugins/group/kindnet/Localhost 0.1
301 TestNetworkPlugins/group/kindnet/HairPin 0.1
302 TestNetworkPlugins/group/bridge/Start 50.11
303 TestNetworkPlugins/group/kubenet/Start 59.42
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
305 TestNetworkPlugins/group/bridge/NetCatPod 11.17
306 TestNetworkPlugins/group/bridge/DNS 0.12
307 TestNetworkPlugins/group/bridge/Localhost 0.1
308 TestNetworkPlugins/group/bridge/HairPin 0.1
309 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
310 TestNetworkPlugins/group/kubenet/NetCatPod 12.17
311 TestNetworkPlugins/group/kubenet/DNS 0.13
312 TestNetworkPlugins/group/kubenet/Localhost 0.11
313 TestNetworkPlugins/group/kubenet/HairPin 0.1
314 TestNetworkPlugins/group/custom-flannel/Start 58.71
315 TestNetworkPlugins/group/calico/Start 68.77
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.21
318 TestNetworkPlugins/group/custom-flannel/DNS 0.14
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
321 TestNetworkPlugins/group/calico/ControllerPod 5.02
322 TestNetworkPlugins/group/calico/KubeletFlags 0.15
323 TestNetworkPlugins/group/calico/NetCatPod 11.28
324 TestNetworkPlugins/group/false/Start 88.68
325 TestNetworkPlugins/group/calico/DNS 0.13
326 TestNetworkPlugins/group/calico/Localhost 0.1
327 TestNetworkPlugins/group/calico/HairPin 0.15
329 TestStartStop/group/old-k8s-version/serial/FirstStart 140.15
330 TestNetworkPlugins/group/false/KubeletFlags 0.16
331 TestNetworkPlugins/group/false/NetCatPod 14.17
332 TestNetworkPlugins/group/false/DNS 0.12
333 TestNetworkPlugins/group/false/Localhost 0.1
334 TestNetworkPlugins/group/false/HairPin 0.1
336 TestStartStop/group/no-preload/serial/FirstStart 57.86
337 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
339 TestStartStop/group/old-k8s-version/serial/Stop 8.27
340 TestStartStop/group/no-preload/serial/DeployApp 9.27
341 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
342 TestStartStop/group/old-k8s-version/serial/SecondStart 476.55
343 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.84
344 TestStartStop/group/no-preload/serial/Stop 8.29
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
346 TestStartStop/group/no-preload/serial/SecondStart 299.86
347 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
349 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.18
350 TestStartStop/group/no-preload/serial/Pause 1.85
352 TestStartStop/group/embed-certs/serial/FirstStart 86.88
353 TestStartStop/group/embed-certs/serial/DeployApp 9.25
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
355 TestStartStop/group/embed-certs/serial/Stop 8.27
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
357 TestStartStop/group/embed-certs/serial/SecondStart 297.23
358 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
359 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
361 TestStartStop/group/old-k8s-version/serial/Pause 1.73
363 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.73
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
366 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.24
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 321.22
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
371 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.17
372 TestStartStop/group/embed-certs/serial/Pause 1.75
374 TestStartStop/group/newest-cni/serial/FirstStart 47.81
375 TestStartStop/group/newest-cni/serial/DeployApp 0
376 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
377 TestStartStop/group/newest-cni/serial/Stop 8.27
378 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
379 TestStartStop/group/newest-cni/serial/SecondStart 38.2
380 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
383 TestStartStop/group/newest-cni/serial/Pause 1.77
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.81
x
+
TestDownloadOnly/v1.16.0/json-events (11.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-677000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-677000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (11.739702207s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.74s)
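
The subtest drives "out/minikube-darwin-amd64 start -o=json --download-only ..." exactly as shown above; with -o=json, minikube is expected to emit its progress as one JSON object per stdout line. Below is a hypothetical validator for such a stream, not the test's actual assertion logic (the file name and the stdin-based design are assumptions); pipe the command's stdout into it.

	// jsonevents_sketch.go: hypothetical sketch, not aaa_download_only_test.go itself.
	// Reads minikube -o=json output from stdin and checks that every line parses as JSON.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		line := 0
		for sc.Scan() {
			line++
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				fmt.Printf("line %d is not valid JSON: %v\n", line, err)
				os.Exit(1)
			}
		}
		fmt.Printf("parsed %d JSON event lines\n", line)
	}

Example invocation, reusing the command from the run above:

	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-677000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit | go run jsonevents_sketch.go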

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-677000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-677000: exit status 85 (274.345689ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-677000 | jenkins | v1.31.2 | 25 Sep 23 03:32 PDT |          |
	|         | -p download-only-677000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:32:57
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:32:57.628326    1489 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:32:57.628583    1489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:32:57.628589    1489 out.go:309] Setting ErrFile to fd 2...
	I0925 03:32:57.628593    1489 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:32:57.628760    1489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	W0925 03:32:57.628860    1489 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17297-1019/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17297-1019/.minikube/config/config.json: no such file or directory
	I0925 03:32:57.630463    1489 out.go:303] Setting JSON to true
	I0925 03:32:57.651915    1489 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":151,"bootTime":1695637826,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 03:32:57.652021    1489 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:32:57.674935    1489 out.go:97] [download-only-677000] minikube v1.31.2 on Darwin 13.6
	I0925 03:32:57.695506    1489 out.go:169] MINIKUBE_LOCATION=17297
	I0925 03:32:57.675187    1489 notify.go:220] Checking for updates...
	W0925 03:32:57.675254    1489 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball: no such file or directory
	I0925 03:32:57.737645    1489 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 03:32:57.758506    1489 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 03:32:57.779785    1489 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:32:57.801860    1489 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	W0925 03:32:57.844768    1489 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 03:32:57.845297    1489 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:32:57.938713    1489 out.go:97] Using the hyperkit driver based on user configuration
	I0925 03:32:57.938743    1489 start.go:298] selected driver: hyperkit
	I0925 03:32:57.938750    1489 start.go:902] validating driver "hyperkit" against <nil>
	I0925 03:32:57.938874    1489 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:32:57.939092    1489 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17297-1019/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0925 03:32:58.080001    1489 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0925 03:32:58.084083    1489 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:32:58.084100    1489 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0925 03:32:58.084129    1489 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0925 03:32:58.088039    1489 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0925 03:32:58.088193    1489 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0925 03:32:58.088222    1489 cni.go:84] Creating CNI manager for ""
	I0925 03:32:58.088237    1489 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0925 03:32:58.088245    1489 start_flags.go:321] config:
	{Name:download-only-677000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:32:58.088493    1489 iso.go:125] acquiring lock: {Name:mk5685b8103aa0f952a2e44c47bdd1882fdd0bc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:32:58.109518    1489 out.go:97] Downloading VM boot image ...
	I0925 03:32:58.109683    1489 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I0925 03:33:02.192079    1489 out.go:97] Starting control plane node download-only-677000 in cluster download-only-677000
	I0925 03:33:02.192113    1489 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:02.270036    1489 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0925 03:33:02.270066    1489 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:02.270368    1489 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0925 03:33:02.291267    1489 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0925 03:33:02.291295    1489 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0925 03:33:02.374052    1489 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0925 03:33:07.452413    1489 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0925 03:33:07.452548    1489 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-677000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.27s)
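
The "Last Start" log above shows the v1.16.0 preload tarball being downloaded and its checksum verified under /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/. The earlier TestDownloadOnly/v1.16.0/preload-exists subtest presumably only needs that cached file to be present; the sketch below is a hypothetical stand-in for such a check, not the test's actual code, and hard-codes the cache path taken from the log.

	// preload_exists_sketch.go: hypothetical sketch, not the actual test code.
	// Verifies that the cached preload tarball referenced in the log above is present.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// MINIKUBE_HOME as printed in the log above.
		minikubeHome := "/Users/jenkins/minikube-integration/17297-1019/.minikube"
		tarball := filepath.Join(minikubeHome, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")

		info, err := os.Stat(tarball)
		if err != nil {
			fmt.Printf("preload missing: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("preload present: %s (%d bytes)\n", tarball, info.Size())
	}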

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (7.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-677000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-677000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=hyperkit : (7.488120824s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (7.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-677000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-677000: exit status 85 (318.223566ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-677000 | jenkins | v1.31.2 | 25 Sep 23 03:32 PDT |          |
	|         | -p download-only-677000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-677000 | jenkins | v1.31.2 | 25 Sep 23 03:33 PDT |          |
	|         | -p download-only-677000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/25 03:33:09
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0925 03:33:09.648798    1503 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:33:09.649080    1503 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:09.649085    1503 out.go:309] Setting ErrFile to fd 2...
	I0925 03:33:09.649089    1503 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:33:09.649278    1503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	W0925 03:33:09.649376    1503 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17297-1019/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17297-1019/.minikube/config/config.json: no such file or directory
	I0925 03:33:09.651058    1503 out.go:303] Setting JSON to true
	I0925 03:33:09.671853    1503 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":163,"bootTime":1695637826,"procs":383,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 03:33:09.671985    1503 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:33:09.693442    1503 out.go:97] [download-only-677000] minikube v1.31.2 on Darwin 13.6
	I0925 03:33:09.714477    1503 out.go:169] MINIKUBE_LOCATION=17297
	I0925 03:33:09.693600    1503 notify.go:220] Checking for updates...
	I0925 03:33:09.756339    1503 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 03:33:09.777369    1503 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 03:33:09.798373    1503 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:33:09.819370    1503 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	W0925 03:33:09.861314    1503 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0925 03:33:09.861721    1503 config.go:182] Loaded profile config "download-only-677000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0925 03:33:09.861766    1503 start.go:810] api.Load failed for download-only-677000: filestore "download-only-677000": Docker machine "download-only-677000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0925 03:33:09.861839    1503 driver.go:373] Setting default libvirt URI to qemu:///system
	W0925 03:33:09.861859    1503 start.go:810] api.Load failed for download-only-677000: filestore "download-only-677000": Docker machine "download-only-677000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0925 03:33:09.889301    1503 out.go:97] Using the hyperkit driver based on existing profile
	I0925 03:33:09.889330    1503 start.go:298] selected driver: hyperkit
	I0925 03:33:09.889336    1503 start.go:902] validating driver "hyperkit" against &{Name:download-only-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-677000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:09.889537    1503 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:09.889660    1503 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17297-1019/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0925 03:33:09.896887    1503 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I0925 03:33:09.900397    1503 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:33:09.900423    1503 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0925 03:33:09.902839    1503 cni.go:84] Creating CNI manager for ""
	I0925 03:33:09.902862    1503 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0925 03:33:09.902879    1503 start_flags.go:321] config:
	{Name:download-only-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-677000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:33:09.903016    1503 iso.go:125] acquiring lock: {Name:mk5685b8103aa0f952a2e44c47bdd1882fdd0bc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0925 03:33:09.924342    1503 out.go:97] Starting control plane node download-only-677000 in cluster download-only-677000
	I0925 03:33:09.924362    1503 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:09.978958    1503 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I0925 03:33:09.978988    1503 cache.go:57] Caching tarball of preloaded images
	I0925 03:33:09.979337    1503 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I0925 03:33:10.000630    1503 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0925 03:33:10.000645    1503 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I0925 03:33:10.097060    1503 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /Users/jenkins/minikube-integration/17297-1019/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-677000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.32s)

TestDownloadOnly/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.37s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-677000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

TestBinaryMirror (1s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-589000 --alsologtostderr --binary-mirror http://127.0.0.1:49342 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-589000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-589000
--- PASS: TestBinaryMirror (1.00s)

TestOffline (55.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-993000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-993000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (50.727461264s)
helpers_test.go:175: Cleaning up "offline-docker-993000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-993000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-993000: (5.252362852s)
--- PASS: TestOffline (55.98s)

TestAddons/Setup (144.76s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-313000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-313000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m24.760904699s)
--- PASS: TestAddons/Setup (144.76s)

TestAddons/parallel/Registry (14.75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 9.021245ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-nghpj" [382be575-2b8c-4ca2-8d87-33f1b3dc0433] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010169449s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sg4nk" [72ae8b32-ea94-4c9f-82e4-6ad64e183600] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011006882s
addons_test.go:316: (dbg) Run:  kubectl --context addons-313000 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-313000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-313000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.066814918s)
addons_test.go:335: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 ip
2023/09/25 03:35:58 [DEBUG] GET http://192.168.64.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.75s)

TestAddons/parallel/Ingress (20.48s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-313000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-313000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-313000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f7b6f7ed-92dc-4a45-ada7-e7b721658411] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f7b6f7ed-92dc-4a45-ada7-e7b721658411] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.009563472s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-313000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.2
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p addons-313000 addons disable ingress --alsologtostderr -v=1: (7.482218266s)
--- PASS: TestAddons/parallel/Ingress (20.48s)

TestAddons/parallel/InspektorGadget (10.5s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jc46l" [14ecde39-54be-4e4a-b3af-1b234c7b3ae6] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010081491s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-313000
addons_test.go:817: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-313000: (5.493405714s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)

TestAddons/parallel/MetricsServer (5.53s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 5.172986ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-zljjh" [304edefd-76f4-457e-8ec2-38aee2972634] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011160361s
addons_test.go:391: (dbg) Run:  kubectl --context addons-313000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.53s)

TestAddons/parallel/HelmTiller (15.42s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 2.834047ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-5mjdm" [a2a4d2e9-63ba-4d8e-8277-e9801c388df0] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01091265s
addons_test.go:449: (dbg) Run:  kubectl --context addons-313000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-313000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.800673867s)
addons_test.go:454: kubectl --context addons-313000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:449: (dbg) Run:  kubectl --context addons-313000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-313000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.874351366s)
addons_test.go:466: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.42s)

TestAddons/parallel/CSI (60.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 4.493989ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-313000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-313000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f1c2c785-1d51-40c8-bd17-6bd68db51cce] Pending
helpers_test.go:344: "task-pv-pod" [f1c2c785-1d51-40c8-bd17-6bd68db51cce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f1c2c785-1d51-40c8-bd17-6bd68db51cce] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.01274089s
addons_test.go:560: (dbg) Run:  kubectl --context addons-313000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-313000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-313000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-313000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-313000 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-313000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-313000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-313000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9e0e73e9-d01f-4f59-9935-72d4b3900b1e] Pending
helpers_test.go:344: "task-pv-pod-restore" [9e0e73e9-d01f-4f59-9935-72d4b3900b1e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9e0e73e9-d01f-4f59-9935-72d4b3900b1e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.012577179s
addons_test.go:602: (dbg) Run:  kubectl --context addons-313000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-313000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-313000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-amd64 -p addons-313000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.393282674s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-amd64 -p addons-313000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.41s)

TestAddons/parallel/Headlamp (13.19s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-313000 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-313000 --alsologtostderr -v=1: (1.175902363s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-s8bql" [11844e35-df88-49e9-9b04-c4a4b43540fa] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-s8bql" [11844e35-df88-49e9-9b04-c4a4b43540fa] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.009675692s
--- PASS: TestAddons/parallel/Headlamp (13.19s)

TestAddons/parallel/CloudSpanner (5.43s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-7kb5j" [6aecbc62-d22c-49e8-b820-30a566234336] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008807268s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-313000
--- PASS: TestAddons/parallel/CloudSpanner (5.43s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-313000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-313000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/StoppedEnableDisable (5.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-313000
addons_test.go:148: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-313000: (5.216889233s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-313000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-313000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-313000
--- PASS: TestAddons/StoppedEnableDisable (5.69s)

TestCertOptions (37.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-162000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-162000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (34.108963933s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-162000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-162000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-162000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-162000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-162000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-162000: (3.440967252s)
--- PASS: TestCertOptions (37.85s)

TestCertExpiration (242.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-130000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-130000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (34.483179282s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-130000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0925 04:07:17.624209    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-130000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (22.478975602s)
helpers_test.go:175: Cleaning up "cert-expiration-130000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-130000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-130000: (5.245036906s)
--- PASS: TestCertExpiration (242.21s)

TestDockerFlags (39.97s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-258000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-258000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (36.190221887s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-258000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-258000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-258000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-258000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-258000: (3.471388789s)
--- PASS: TestDockerFlags (39.97s)

TestForceSystemdFlag (36.67s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-105000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-105000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (33.079573277s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-105000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-105000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-105000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-105000: (3.431920268s)
--- PASS: TestForceSystemdFlag (36.67s)

TestHyperKitDriverInstallOrUpdate (6.72s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.72s)

TestErrorSpam/setup (33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-428000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-428000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 --driver=hyperkit : (32.997226866s)
--- PASS: TestErrorSpam/setup (33.00s)

TestErrorSpam/start (1.43s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 start --dry-run
--- PASS: TestErrorSpam/start (1.43s)

TestErrorSpam/status (0.44s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 status
--- PASS: TestErrorSpam/status (0.44s)

TestErrorSpam/pause (1.24s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 pause
--- PASS: TestErrorSpam/pause (1.24s)

TestErrorSpam/unpause (1.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 unpause
--- PASS: TestErrorSpam/unpause (1.26s)

TestErrorSpam/stop (3.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 stop: (3.222628774s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-428000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-428000 stop
--- PASS: TestErrorSpam/stop (3.65s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17297-1019/.minikube/files/etc/test/nested/copy/1487/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.42s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-220000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-220000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (50.423298949s)
--- PASS: TestFunctional/serial/StartWithProxy (50.42s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-220000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-220000 --alsologtostderr -v=8: (39.603539682s)
functional_test.go:659: soft start took 39.604156159s for "functional-220000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.60s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-220000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 cache add registry.k8s.io/pause:3.1: (1.528594407s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 cache add registry.k8s.io/pause:3.3: (1.399868276s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 cache add registry.k8s.io/pause:latest: (1.36846267s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.30s)

TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local647564415/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cache add minikube-local-cache-test:functional-220000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cache delete minikube-local-cache-test:functional-220000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-220000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (130.948434ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.34s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 kubectl -- --context functional-220000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.54s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.73s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-220000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.73s)

TestFunctional/serial/ExtraConfig (39.31s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-220000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0925 03:40:44.247208    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.253850    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.264677    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.284817    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.326412    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.407434    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.567660    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:44.889061    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:45.529619    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:40:46.810790    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-220000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.304988484s)
functional_test.go:757: restart took 39.305160795s for "functional-220000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.31s)
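
This restart passes `--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision` and `--wait=all`, and the test then reports how long the restart took (about 39.3s here). A minimal sketch of timing the same invocation from Go, with the flags copied from the log and everything else assumed:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-220000",
			"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision", "--wait=all")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Println(string(out))
			panic(err)
		}
		// The log above reports ~39.3s for this restart on the CI host.
		fmt.Printf("restart took %s\n", time.Since(start).Round(time.Millisecond))
	}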

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-220000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
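
ComponentHealth lists the control-plane pods with `kubectl get po -l tier=control-plane -n kube-system -o=json` and checks that each is in phase Running with a Ready condition, which is what the phase/status lines above report. A minimal sketch of the same check, not the test's code; the JSON field names follow the standard Pod object, and the context name comes from the log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// podList models only the fields the health check needs from `kubectl get po -o json`.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-220000",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}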

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (2.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 logs
E0925 03:40:49.371253    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 logs: (2.708696069s)
--- PASS: TestFunctional/serial/LogsCmd (2.71s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (2.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3803194061/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3803194061/001/logs.txt: (2.653883191s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.67s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-220000 apply -f testdata/invalidsvc.yaml
E0925 03:40:54.521906    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-220000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-220000: exit status 115 (245.546483ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.64.4:30907 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-220000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 config get cpus: exit status 14 (41.057688ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 config get cpus: exit status 14 (65.520188ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
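
`config get` on a key that is not set exits with status 14 and prints "specified key could not be found in config", and the test treats that as the expected result rather than a failure. A minimal sketch of telling that case apart from a real failure by inspecting the exit code, using the binary path and profile from the log:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-220000", "config", "get", "cpus")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("cpus = %s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
			// Exit code 14 is what the log shows for "specified key could not be found in config".
			fmt.Println("cpus is not set")
		default:
			fmt.Println("unexpected failure:", err)
		}
	}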

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-220000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-220000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3061: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.73s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-220000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-220000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (661.417122ms)

                                                
                                                
-- stdout --
	* [functional-220000] minikube v1.31.2 on Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 03:42:01.671575    3007 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:42:01.671815    3007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:42:01.671820    3007 out.go:309] Setting ErrFile to fd 2...
	I0925 03:42:01.671824    3007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:42:01.672002    3007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 03:42:01.673424    3007 out.go:303] Setting JSON to false
	I0925 03:42:01.693659    3007 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":695,"bootTime":1695637826,"procs":504,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 03:42:01.693764    3007 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:42:01.760291    3007 out.go:177] * [functional-220000] minikube v1.31.2 on Darwin 13.6
	I0925 03:42:01.818418    3007 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:42:01.781597    3007 notify.go:220] Checking for updates...
	I0925 03:42:01.876492    3007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 03:42:01.918442    3007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 03:42:01.960325    3007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:42:02.002446    3007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 03:42:02.044133    3007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:42:02.065912    3007 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:42:02.066386    3007 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:42:02.066443    3007 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:42:02.073798    3007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50466
	I0925 03:42:02.074181    3007 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:42:02.074609    3007 main.go:141] libmachine: Using API Version  1
	I0925 03:42:02.074627    3007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:42:02.074895    3007 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:42:02.075039    3007 main.go:141] libmachine: (functional-220000) Calling .DriverName
	I0925 03:42:02.075247    3007 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:42:02.075507    3007 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:42:02.075532    3007 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:42:02.082726    3007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50470
	I0925 03:42:02.083092    3007 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:42:02.083468    3007 main.go:141] libmachine: Using API Version  1
	I0925 03:42:02.083487    3007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:42:02.083699    3007 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:42:02.083795    3007 main.go:141] libmachine: (functional-220000) Calling .DriverName
	I0925 03:42:02.111361    3007 out.go:177] * Using the hyperkit driver based on existing profile
	I0925 03:42:02.153304    3007 start.go:298] selected driver: hyperkit
	I0925 03:42:02.153318    3007 start.go:902] validating driver "hyperkit" against &{Name:functional-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:42:02.153422    3007 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:42:02.193692    3007 out.go:177] 
	W0925 03:42:02.230501    3007 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0925 03:42:02.251474    3007 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-220000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-220000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-220000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (718.599505ms)

                                                
                                                
-- stdout --
	* [functional-220000] minikube v1.31.2 sur Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 03:41:50.596686    2861 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:41:50.596923    2861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:41:50.596927    2861 out.go:309] Setting ErrFile to fd 2...
	I0925 03:41:50.596931    2861 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:41:50.597094    2861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 03:41:50.598618    2861 out.go:303] Setting JSON to false
	I0925 03:41:50.618701    2861 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":684,"bootTime":1695637826,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.6","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0925 03:41:50.618807    2861 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0925 03:41:50.642812    2861 out.go:177] * [functional-220000] minikube v1.31.2 sur Darwin 13.6
	I0925 03:41:50.705830    2861 out.go:177]   - MINIKUBE_LOCATION=17297
	I0925 03:41:50.684846    2861 notify.go:220] Checking for updates...
	I0925 03:41:50.764731    2861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	I0925 03:41:50.838868    2861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0925 03:41:50.880993    2861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0925 03:41:50.954882    2861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	I0925 03:41:51.012690    2861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0925 03:41:51.050725    2861 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:41:51.051431    2861 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:41:51.051515    2861 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:41:51.059370    2861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50269
	I0925 03:41:51.059728    2861 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:41:51.060169    2861 main.go:141] libmachine: Using API Version  1
	I0925 03:41:51.060180    2861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:41:51.060391    2861 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:41:51.060494    2861 main.go:141] libmachine: (functional-220000) Calling .DriverName
	I0925 03:41:51.060675    2861 driver.go:373] Setting default libvirt URI to qemu:///system
	I0925 03:41:51.060916    2861 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:41:51.060940    2861 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:41:51.067985    2861 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50271
	I0925 03:41:51.068334    2861 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:41:51.068710    2861 main.go:141] libmachine: Using API Version  1
	I0925 03:41:51.068727    2861 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:41:51.068939    2861 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:41:51.069047    2861 main.go:141] libmachine: (functional-220000) Calling .DriverName
	I0925 03:41:51.098534    2861 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0925 03:41:51.140760    2861 start.go:298] selected driver: hyperkit
	I0925 03:41:51.140776    2861 start.go:902] validating driver "hyperkit" against &{Name:functional-220000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-220000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0925 03:41:51.140870    2861 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0925 03:41:51.180579    2861 out.go:177] 
	W0925 03:41:51.218009    2861 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0925 03:41:51.239579    2861 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.72s)
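
The same dry-run command produces French output here ("Utilisation du pilote hyperkit basé sur le profil existant", i.e. "Using the hyperkit driver based on existing profile"), so the difference is presumably the locale environment of the child process rather than any flag. A minimal sketch of forcing a locale on the invocation; the LC_ALL/LANG values are an assumption for illustration:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-220000",
			"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=hyperkit")
		// Override the locale for this one invocation; fr_FR.UTF-8 is an assumed value.
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput() // exit status 23 is expected here, as in the log
		fmt.Println(string(out))
	}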

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.47s)
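
The status check exercises three output forms: the default table, a Go template (`-f host:{{.Host}},kublet:{{.Kubelet}},...`, quoted verbatim from the log), and `-o json`. A minimal sketch of reading the JSON form; the struct fields mirror the template placeholders shown above, and it assumes the JSON keys carry the same names:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// status models the fields referenced by the template in the log.
	type status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-220000",
			"status", "-o", "json").Output()
		if err != nil {
			// minikube status exits non-zero when a component is stopped; out may still hold JSON.
			fmt.Println("status exited non-zero:", err)
		}
		var st status
		if err := json.Unmarshal(out, &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
	}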

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-220000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-220000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-bk26m" [79f9ba91-6723-4274-bede-23e9f8596ed8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-bk26m" [79f9ba91-6723-4274-bede-23e9f8596ed8] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.010568335s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.64.4:30432
functional_test.go:1674: http://192.168.64.4:30432: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-bk26m

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:30432
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.35s)
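
The connect test creates an echoserver deployment, exposes it as a NodePort service, asks minikube for the URL (http://192.168.64.4:30432 here) and fetches it, which is where the response body above comes from. A minimal sketch of the fetch-and-check step, assuming the service from the log already exists:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask minikube for the NodePort URL of the service created earlier.
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-220000",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out)) // e.g. http://192.168.64.4:30432 in the log

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		// The echoserver response should include the pod hostname, as in the body above.
		fmt.Println(strings.Contains(string(body), "Hostname: hello-node-connect"))
	}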

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (34.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e18d8dfb-b797-408e-9ccd-8fd287c98c32] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.057293897s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-220000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-220000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-220000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-220000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cd8b9a16-36b7-4b8b-9c73-0a3796ed6ef9] Pending
helpers_test.go:344: "sp-pod" [cd8b9a16-36b7-4b8b-9c73-0a3796ed6ef9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0925 03:41:25.243570    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [cd8b9a16-36b7-4b8b-9c73-0a3796ed6ef9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.009789718s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-220000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-220000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-220000 delete -f testdata/storage-provisioner/pod.yaml: (1.335980693s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-220000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [98adff32-9a39-43b9-b2dd-a7a3d159d313] Pending
helpers_test.go:344: "sp-pod" [98adff32-9a39-43b9-b2dd-a7a3d159d313] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [98adff32-9a39-43b9-b2dd-a7a3d159d313] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.011088257s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-220000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (34.09s)
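
The sequence here is what demonstrates persistence: write /tmp/mount/foo from the first sp-pod, delete the pod, recreate it from the same manifest, and list /tmp/mount to confirm the file survived, so the data must live on the claim rather than in the pod. A minimal sketch of that round trip with kubectl, assuming the testdata manifests are available locally and skipping the readiness waits the real test performs:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubectl runs a command against the functional-220000 context and echoes its output.
	func kubectl(args ...string) error {
		cmd := exec.Command("kubectl", append([]string{"--context", "functional-220000"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		// Write through the claim, recycle the pod, and read the file back.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// The real test waits for the new pod to become Ready; a poll would be needed here.
		if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err != nil {
			fmt.Println("file check failed:", err)
		}
	}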

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh -n functional-220000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 cp functional-220000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2987640055/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh -n functional-220000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-220000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-9pxst" [01967f32-0a14-4ce4-b657-df60dc77cf92] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-9pxst" [01967f32-0a14-4ce4-b657-df60dc77cf92] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.014132872s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-220000 exec mysql-859648c796-9pxst -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-220000 exec mysql-859648c796-9pxst -- mysql -ppassword -e "show databases;": exit status 1 (108.057358ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-220000 exec mysql-859648c796-9pxst -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-220000 exec mysql-859648c796-9pxst -- mysql -ppassword -e "show databases;": exit status 1 (121.281071ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-220000 exec mysql-859648c796-9pxst -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.16s)
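
The two non-zero exits above are not check failures: the pod reaches Running before mysqld has finished initializing, so the first `show databases;` probes hit the "Can't connect ... mysqld.sock" error and are simply retried. A minimal sketch of the same retry loop; the pod name is taken from the log, while the timeout and backoff values are arbitrary:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		pod := "mysql-859648c796-9pxst" // pod name from the log above
		deadline := time.Now().Add(2 * time.Minute)

		for {
			out, err := exec.Command("kubectl", "--context", "functional-220000",
				"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up waiting for mysqld:", err)
				return
			}
			// mysqld is usually still starting when the first attempts fail; back off and retry.
			time.Sleep(5 * time.Second)
		}
	}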

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1487/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /etc/test/nested/copy/1487/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1487.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /etc/ssl/certs/1487.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1487.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /usr/share/ca-certificates/1487.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /etc/ssl/certs/14872.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14872.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /usr/share/ca-certificates/14872.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.80s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-220000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh "sudo systemctl is-active crio": exit status 1 (118.072494ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
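
With docker selected as the runtime, `systemctl is-active crio` inside the VM prints "inactive" and exits non-zero (status 3 from systemctl, surfaced as status 1 by `minikube ssh`), which is exactly what this test wants to see. A minimal sketch of reading both the text and the error, using the binary path and profile from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-220000",
			"ssh", "sudo systemctl is-active crio")
		out, err := cmd.Output() // stdout is still captured even though the command exits non-zero
		state := strings.TrimSpace(string(out))

		// systemctl is-active exits non-zero for anything other than "active",
		// so the error here is expected when the runtime is disabled.
		if state != "active" {
			fmt.Printf("crio is %q (err: %v) - OK, runtime is not active\n", state, err)
		} else {
			fmt.Println("unexpected: crio is active")
		}
	}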

                                                
                                    
x
+
TestFunctional/parallel/License (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2540: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-220000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-220000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-220000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-220000 image ls --format short --alsologtostderr:
I0925 03:42:04.588645    3060 out.go:296] Setting OutFile to fd 1 ...
I0925 03:42:04.588940    3060 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:04.588945    3060 out.go:309] Setting ErrFile to fd 2...
I0925 03:42:04.588950    3060 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:04.589132    3060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
I0925 03:42:04.589764    3060 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:04.589871    3060 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:04.590247    3060 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:04.590297    3060 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:04.597364    3060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50529
I0925 03:42:04.597804    3060 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:04.598269    3060 main.go:141] libmachine: Using API Version  1
I0925 03:42:04.598298    3060 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:04.598533    3060 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:04.598646    3060 main.go:141] libmachine: (functional-220000) Calling .GetState
I0925 03:42:04.598732    3060 main.go:141] libmachine: (functional-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0925 03:42:04.598804    3060 main.go:141] libmachine: (functional-220000) DBG | hyperkit pid from json: 2152
I0925 03:42:04.600310    3060 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:04.600339    3060 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:04.607694    3060 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50536
I0925 03:42:04.608024    3060 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:04.608411    3060 main.go:141] libmachine: Using API Version  1
I0925 03:42:04.608430    3060 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:04.608629    3060 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:04.608729    3060 main.go:141] libmachine: (functional-220000) Calling .DriverName
I0925 03:42:04.608885    3060 ssh_runner.go:195] Run: systemctl --version
I0925 03:42:04.608905    3060 main.go:141] libmachine: (functional-220000) Calling .GetSSHHostname
I0925 03:42:04.608994    3060 main.go:141] libmachine: (functional-220000) Calling .GetSSHPort
I0925 03:42:04.609073    3060 main.go:141] libmachine: (functional-220000) Calling .GetSSHKeyPath
I0925 03:42:04.609179    3060 main.go:141] libmachine: (functional-220000) Calling .GetSSHUsername
I0925 03:42:04.609302    3060 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/functional-220000/id_rsa Username:docker}
I0925 03:42:04.675828    3060 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 03:42:04.706509    3060 main.go:141] libmachine: Making call to close driver server
I0925 03:42:04.706518    3060 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:04.706674    3060 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:04.706676    3060 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:04.706687    3060 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:04.706695    3060 main.go:141] libmachine: Making call to close driver server
I0925 03:42:04.706700    3060 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:04.706870    3060 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:04.706881    3060 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:04.706886    3060 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)
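
The verbose stderr shows how `image ls` works against the docker runtime: an SSH session into the VM runs `docker images --no-trunc --format "{{json .}}"` and the CLI formats the result. A minimal sketch of consuming that per-line JSON output, run against a local docker daemon directly instead of over minikube's SSH runner:

	package main

	import (
		"bufio"
		"bytes"
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image models the fields docker emits for `--format "{{json .}}"`.
	type image struct {
		Repository string
		Tag        string
		ID         string
		Size       string
	}

	func main() {
		out, err := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		// Each output line is one JSON object describing one image.
		sc := bufio.NewScanner(bytes.NewReader(out))
		for sc.Scan() {
			var img image
			if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
				continue
			}
			fmt.Printf("%s:%s %s %s\n", img.Repository, img.Tag, img.ID, img.Size)
		}
	}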

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-220000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| docker.io/library/nginx                     | alpine            | 433dbc17191a7 | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/google-containers/addon-resizer      | functional-220000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-220000 | e058fdab2bbf9 | 30B    |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-220000 image ls --format table --alsologtostderr:
I0925 03:42:04.908789    3069 out.go:296] Setting OutFile to fd 1 ...
I0925 03:42:04.909069    3069 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:04.909074    3069 out.go:309] Setting ErrFile to fd 2...
I0925 03:42:04.909079    3069 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:04.909274    3069 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
I0925 03:42:04.909907    3069 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:04.910002    3069 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:04.910376    3069 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:04.910432    3069 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:04.917329    3069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50555
I0925 03:42:04.917755    3069 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:04.918191    3069 main.go:141] libmachine: Using API Version  1
I0925 03:42:04.918213    3069 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:04.918434    3069 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:04.918532    3069 main.go:141] libmachine: (functional-220000) Calling .GetState
I0925 03:42:04.918630    3069 main.go:141] libmachine: (functional-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0925 03:42:04.918694    3069 main.go:141] libmachine: (functional-220000) DBG | hyperkit pid from json: 2152
I0925 03:42:04.920122    3069 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:04.920152    3069 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:04.927378    3069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50557
I0925 03:42:04.927755    3069 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:04.928136    3069 main.go:141] libmachine: Using API Version  1
I0925 03:42:04.928151    3069 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:04.928388    3069 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:04.928511    3069 main.go:141] libmachine: (functional-220000) Calling .DriverName
I0925 03:42:04.928682    3069 ssh_runner.go:195] Run: systemctl --version
I0925 03:42:04.928705    3069 main.go:141] libmachine: (functional-220000) Calling .GetSSHHostname
I0925 03:42:04.928796    3069 main.go:141] libmachine: (functional-220000) Calling .GetSSHPort
I0925 03:42:04.928884    3069 main.go:141] libmachine: (functional-220000) Calling .GetSSHKeyPath
I0925 03:42:04.928967    3069 main.go:141] libmachine: (functional-220000) Calling .GetSSHUsername
I0925 03:42:04.929069    3069 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/functional-220000/id_rsa Username:docker}
I0925 03:42:04.961172    3069 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 03:42:04.979180    3069 main.go:141] libmachine: Making call to close driver server
I0925 03:42:04.979189    3069 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:04.979344    3069 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:04.979371    3069 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:04.979379    3069 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:04.979387    3069 main.go:141] libmachine: Making call to close driver server
I0925 03:42:04.979392    3069 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:04.979523    3069 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:04.979564    3069 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:04.979579    3069 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-220000 image ls --format json --alsologtostderr:
[{"id":"e058fdab2bbf96b12edf05e73798a2f91bb76c5e85b1946efb3eb28602dcb240","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-220000"],"size":"30"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-220000"],"size":"32900000
"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoD
igests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-220000 image ls --format json --alsologtostderr:
I0925 03:42:04.772746    3065 out.go:296] Setting OutFile to fd 1 ...
I0925 03:42:04.772932    3065 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:04.772937    3065 out.go:309] Setting ErrFile to fd 2...
I0925 03:42:04.772941    3065 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:04.773125    3065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
I0925 03:42:04.773735    3065 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:04.773835    3065 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:04.774189    3065 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:04.774235    3065 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:04.781126    3065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50549
I0925 03:42:04.781524    3065 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:04.781925    3065 main.go:141] libmachine: Using API Version  1
I0925 03:42:04.781953    3065 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:04.782193    3065 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:04.782302    3065 main.go:141] libmachine: (functional-220000) Calling .GetState
I0925 03:42:04.782401    3065 main.go:141] libmachine: (functional-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0925 03:42:04.782461    3065 main.go:141] libmachine: (functional-220000) DBG | hyperkit pid from json: 2152
I0925 03:42:04.783829    3065 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:04.783852    3065 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:04.790678    3065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50551
I0925 03:42:04.791009    3065 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:04.791324    3065 main.go:141] libmachine: Using API Version  1
I0925 03:42:04.791333    3065 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:04.791544    3065 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:04.791643    3065 main.go:141] libmachine: (functional-220000) Calling .DriverName
I0925 03:42:04.791786    3065 ssh_runner.go:195] Run: systemctl --version
I0925 03:42:04.791808    3065 main.go:141] libmachine: (functional-220000) Calling .GetSSHHostname
I0925 03:42:04.791876    3065 main.go:141] libmachine: (functional-220000) Calling .GetSSHPort
I0925 03:42:04.791972    3065 main.go:141] libmachine: (functional-220000) Calling .GetSSHKeyPath
I0925 03:42:04.792046    3065 main.go:141] libmachine: (functional-220000) Calling .GetSSHUsername
I0925 03:42:04.792130    3065 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/functional-220000/id_rsa Username:docker}
I0925 03:42:04.827216    3065 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 03:42:04.844070    3065 main.go:141] libmachine: Making call to close driver server
I0925 03:42:04.844082    3065 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:04.844241    3065 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:04.844250    3065 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:04.844259    3065 main.go:141] libmachine: Making call to close driver server
I0925 03:42:04.844265    3065 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:04.844287    3065 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:04.844418    3065 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:04.844427    3065 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:04.844439    3065 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-220000 image ls --format yaml --alsologtostderr:
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e058fdab2bbf96b12edf05e73798a2f91bb76c5e85b1946efb3eb28602dcb240
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-220000
size: "30"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-220000
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-220000 image ls --format yaml --alsologtostderr:
I0925 03:42:05.042911    3073 out.go:296] Setting OutFile to fd 1 ...
I0925 03:42:05.043183    3073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:05.043188    3073 out.go:309] Setting ErrFile to fd 2...
I0925 03:42:05.043192    3073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:05.043375    3073 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
I0925 03:42:05.044110    3073 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:05.044215    3073 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:05.044573    3073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:05.044626    3073 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:05.051635    3073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50561
I0925 03:42:05.052097    3073 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:05.052546    3073 main.go:141] libmachine: Using API Version  1
I0925 03:42:05.052558    3073 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:05.052826    3073 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:05.052956    3073 main.go:141] libmachine: (functional-220000) Calling .GetState
I0925 03:42:05.053054    3073 main.go:141] libmachine: (functional-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0925 03:42:05.053121    3073 main.go:141] libmachine: (functional-220000) DBG | hyperkit pid from json: 2152
I0925 03:42:05.054534    3073 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:05.054560    3073 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:05.061614    3073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50563
I0925 03:42:05.061965    3073 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:05.062312    3073 main.go:141] libmachine: Using API Version  1
I0925 03:42:05.062322    3073 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:05.062552    3073 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:05.062665    3073 main.go:141] libmachine: (functional-220000) Calling .DriverName
I0925 03:42:05.062829    3073 ssh_runner.go:195] Run: systemctl --version
I0925 03:42:05.062849    3073 main.go:141] libmachine: (functional-220000) Calling .GetSSHHostname
I0925 03:42:05.062931    3073 main.go:141] libmachine: (functional-220000) Calling .GetSSHPort
I0925 03:42:05.063010    3073 main.go:141] libmachine: (functional-220000) Calling .GetSSHKeyPath
I0925 03:42:05.063096    3073 main.go:141] libmachine: (functional-220000) Calling .GetSSHUsername
I0925 03:42:05.063202    3073 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/functional-220000/id_rsa Username:docker}
I0925 03:42:05.108797    3073 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0925 03:42:05.136056    3073 main.go:141] libmachine: Making call to close driver server
I0925 03:42:05.136064    3073 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:05.136221    3073 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:05.136249    3073 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:05.136258    3073 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:05.136266    3073 main.go:141] libmachine: Making call to close driver server
I0925 03:42:05.136271    3073 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:05.136420    3073 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:05.136427    3073 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:05.136429    3073 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)
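Note: the ImageList tests above only differ in the output format flag passed to image ls. A condensed sketch of the same checks, assuming the functional-220000 profile is still running:

    out/minikube-darwin-amd64 -p functional-220000 image ls --format table
    out/minikube-darwin-amd64 -p functional-220000 image ls --format json
    out/minikube-darwin-amd64 -p functional-220000 image ls --format yaml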

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh pgrep buildkitd: exit status 1 (104.66382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image build -t localhost/my-image:functional-220000 testdata/build --alsologtostderr
E0925 03:42:06.204336    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 image build -t localhost/my-image:functional-220000 testdata/build --alsologtostderr: (1.982013058s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-220000 image build -t localhost/my-image:functional-220000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 7a5533a6db36
Removing intermediate container 7a5533a6db36
---> 5743804ceb18
Step 3/3 : ADD content.txt /
---> 8040d11befc5
Successfully built 8040d11befc5
Successfully tagged localhost/my-image:functional-220000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-220000 image build -t localhost/my-image:functional-220000 testdata/build --alsologtostderr:
I0925 03:42:05.321261    3082 out.go:296] Setting OutFile to fd 1 ...
I0925 03:42:05.321598    3082 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:05.321603    3082 out.go:309] Setting ErrFile to fd 2...
I0925 03:42:05.321607    3082 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0925 03:42:05.321780    3082 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
I0925 03:42:05.322398    3082 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:05.323034    3082 config.go:182] Loaded profile config "functional-220000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I0925 03:42:05.323404    3082 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:05.323446    3082 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:05.330109    3082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50574
I0925 03:42:05.330513    3082 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:05.330908    3082 main.go:141] libmachine: Using API Version  1
I0925 03:42:05.330938    3082 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:05.331169    3082 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:05.331271    3082 main.go:141] libmachine: (functional-220000) Calling .GetState
I0925 03:42:05.331349    3082 main.go:141] libmachine: (functional-220000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0925 03:42:05.331418    3082 main.go:141] libmachine: (functional-220000) DBG | hyperkit pid from json: 2152
I0925 03:42:05.332811    3082 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0925 03:42:05.332831    3082 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0925 03:42:05.339695    3082 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50576
I0925 03:42:05.340054    3082 main.go:141] libmachine: () Calling .GetVersion
I0925 03:42:05.340418    3082 main.go:141] libmachine: Using API Version  1
I0925 03:42:05.340439    3082 main.go:141] libmachine: () Calling .SetConfigRaw
I0925 03:42:05.340653    3082 main.go:141] libmachine: () Calling .GetMachineName
I0925 03:42:05.340772    3082 main.go:141] libmachine: (functional-220000) Calling .DriverName
I0925 03:42:05.340929    3082 ssh_runner.go:195] Run: systemctl --version
I0925 03:42:05.340947    3082 main.go:141] libmachine: (functional-220000) Calling .GetSSHHostname
I0925 03:42:05.341018    3082 main.go:141] libmachine: (functional-220000) Calling .GetSSHPort
I0925 03:42:05.341097    3082 main.go:141] libmachine: (functional-220000) Calling .GetSSHKeyPath
I0925 03:42:05.341191    3082 main.go:141] libmachine: (functional-220000) Calling .GetSSHUsername
I0925 03:42:05.341279    3082 sshutil.go:53] new ssh client: &{IP:192.168.64.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/functional-220000/id_rsa Username:docker}
I0925 03:42:05.375083    3082 build_images.go:151] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3045737627.tar
I0925 03:42:05.375153    3082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0925 03:42:05.382345    3082 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3045737627.tar
I0925 03:42:05.385056    3082 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3045737627.tar: stat -c "%s %y" /var/lib/minikube/build/build.3045737627.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3045737627.tar': No such file or directory
I0925 03:42:05.385091    3082 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3045737627.tar --> /var/lib/minikube/build/build.3045737627.tar (3072 bytes)
I0925 03:42:05.403615    3082 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3045737627
I0925 03:42:05.409868    3082 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3045737627 -xf /var/lib/minikube/build/build.3045737627.tar
I0925 03:42:05.415938    3082 docker.go:340] Building image: /var/lib/minikube/build/build.3045737627
I0925 03:42:05.416004    3082 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-220000 /var/lib/minikube/build/build.3045737627
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0925 03:42:07.215285    3082 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-220000 /var/lib/minikube/build/build.3045737627: (1.799242295s)
I0925 03:42:07.215345    3082 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3045737627
I0925 03:42:07.224059    3082 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3045737627.tar
I0925 03:42:07.233855    3082 build_images.go:207] Built localhost/my-image:functional-220000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3045737627.tar
I0925 03:42:07.233880    3082 build_images.go:123] succeeded building to: functional-220000
I0925 03:42:07.233885    3082 build_images.go:124] failed building to: 
I0925 03:42:07.233900    3082 main.go:141] libmachine: Making call to close driver server
I0925 03:42:07.233907    3082 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:07.234067    3082 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:07.234080    3082 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:07.234084    3082 main.go:141] libmachine: Making call to close connection to plugin binary
I0925 03:42:07.234095    3082 main.go:141] libmachine: Making call to close driver server
I0925 03:42:07.234100    3082 main.go:141] libmachine: (functional-220000) Calling .Close
I0925 03:42:07.234221    3082 main.go:141] libmachine: Successfully made call to close driver server
I0925 03:42:07.234227    3082 main.go:141] libmachine: (functional-220000) DBG | Closing plugin on server side
I0925 03:42:07.234230    3082 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls
2023/09/25 03:42:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)
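Note: the Step 1/3 to 3/3 output above corresponds to a three-instruction Dockerfile in testdata/build (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A manual re-run of what this test exercises, assuming the same profile and build context, would be:

    # build the same context against the cluster's Docker daemon, then confirm the tag is listed
    out/minikube-darwin-amd64 -p functional-220000 image build -t localhost/my-image:functional-220000 testdata/build
    out/minikube-darwin-amd64 -p functional-220000 image ls | grep my-image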

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.35s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.282229316s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-220000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-220000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [facbb3ad-8104-4178-a06c-b0c1b12cc068] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [facbb3ad-8104-4178-a06c-b0c1b12cc068] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.014505258s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image load --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 image load --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr: (2.832337068s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.98s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image load --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr
E0925 03:41:04.762529    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 image load --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr: (1.778951431s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.947181722s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-220000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image load --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 image load --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr: (2.577508879s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.71s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-220000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.245.120 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
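Note: taken together, the TunnelCmd serial steps above amount to the following manual workflow (a sketch reusing this run's profile, service, and addresses; the cluster DNS address 10.96.0.10 and the nginx-svc service from testdata/testsvc.yaml are specific to this run):

    # keep the tunnel running in its own terminal
    out/minikube-darwin-amd64 -p functional-220000 tunnel --alsologtostderr
    # in another terminal: confirm the LoadBalancer ingress IP, then resolve the service name
    kubectl --context functional-220000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.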

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.64s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-220000 docker-env) && out/minikube-darwin-amd64 status -p functional-220000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-220000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.64s)
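Note: docker-env prints shell exports that point the host's docker CLI at the Docker daemon inside the minikube VM, which is why the test wraps it in eval before running docker images. A minimal sketch of the same pattern (the DOCKER_HOST value in the comment is illustrative, based on this run's VM IP, not copied from the log):

    eval $(out/minikube-darwin-amd64 -p functional-220000 docker-env)
    # DOCKER_HOST now points at the VM's daemon, e.g. tcp://192.168.64.4:2376
    docker images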

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.38s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.38s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image save gcr.io/google-containers/addon-resizer:functional-220000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image rm gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-220000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 image save --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 image save --daemon gcr.io/google-containers/addon-resizer:functional-220000 --alsologtostderr: (1.015855069s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-220000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.12s)
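Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above form a save/remove/reload round-trip. Condensed into one sequence, using the same commands and tarball path as this run:

    out/minikube-darwin-amd64 -p functional-220000 image save gcr.io/google-containers/addon-resizer:functional-220000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-220000 image rm gcr.io/google-containers/addon-resizer:functional-220000
    out/minikube-darwin-amd64 -p functional-220000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-amd64 -p functional-220000 image save --daemon gcr.io/google-containers/addon-resizer:functional-220000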

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "217.272313ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "60.51473ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "179.769979ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "61.639666ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-220000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-220000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-h6gmt" [7082bf6d-a49a-4432-bc31-7878f334d913] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-h6gmt" [7082bf6d-a49a-4432-bc31-7878f334d913] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.009524004s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
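Note: the ServiceCmd tests deploy an echoserver and expose it as a NodePort before listing services. The equivalent manual steps, taken from the commands above:

    kubectl --context functional-220000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-220000 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-darwin-amd64 -p functional-220000 service list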

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2777720070/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695638511286530000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2777720070/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695638511286530000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2777720070/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695638511286530000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2777720070/001/test-1695638511286530000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (136.497907ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 25 10:41 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 25 10:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 25 10:41 test-1695638511286530000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh cat /mount-9p/test-1695638511286530000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-220000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ae5aaf00-a9cd-4abe-b544-38a25ea8e7c1] Pending
helpers_test.go:344: "busybox-mount" [ae5aaf00-a9cd-4abe-b544-38a25ea8e7c1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ae5aaf00-a9cd-4abe-b544-38a25ea8e7c1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ae5aaf00-a9cd-4abe-b544-38a25ea8e7c1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.012946027s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-220000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2777720070/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.18s)
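Note: the mount test drives a host-to-guest 9p mount and then verifies it both over ssh and from a pod. A condensed sketch of the host-side steps, reusing this run's temp directory (backgrounding with & stands in for the test's daemon helper):

    out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2777720070/001:/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-220000 ssh -- ls -la /mount-9p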

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.3s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2184733011/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (136.786125ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2184733011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh "sudo umount -f /mount-9p": exit status 1 (108.89481ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-220000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2184733011/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 service list
functional_test.go:1458: (dbg) Done: out/minikube-darwin-amd64 -p functional-220000 service list: (1.112810313s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1491266067/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1491266067/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1491266067/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T" /mount1: exit status 1 (143.083154ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-220000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1491266067/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1491266067/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-220000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1491266067/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)
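The cleanup path above relies on the --kill flag rather than unmounting each share individually; roughly, with placeholder host paths standing in for the test's temp directories:

	# start several mounts against the same profile (placeholder host paths)
	out/minikube-darwin-amd64 mount -p functional-220000 /path/one:/mount1 --alsologtostderr -v=1 &
	out/minikube-darwin-amd64 mount -p functional-220000 /path/two:/mount2 --alsologtostderr -v=1 &
	# kill every mount process for the profile in one shot
	out/minikube-darwin-amd64 mount -p functional-220000 --kill=true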

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.8s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 service list -o json
functional_test.go:1493: Took "796.795678ms" to run "out/minikube-darwin-amd64 -p functional-220000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.80s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.67s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.64.4:30775
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.67s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.73s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-220000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.64.4:30775
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)
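The ServiceCmd subtests above walk the same hello-node service through each output mode; condensed, the commands they ran were:

	out/minikube-darwin-amd64 -p functional-220000 service list            # human-readable table
	out/minikube-darwin-amd64 -p functional-220000 service list -o json    # machine-readable listing
	out/minikube-darwin-amd64 -p functional-220000 service --namespace=default --https --url hello-node
	out/minikube-darwin-amd64 -p functional-220000 service hello-node --url --format={{.IP}}
	out/minikube-darwin-amd64 -p functional-220000 service hello-node --url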

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.13s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-220000
--- PASS: TestFunctional/delete_addon-resizer_images (0.13s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-220000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-220000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (38.45s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-287000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-287000 --driver=hyperkit : (38.451751249s)
--- PASS: TestImageBuild/serial/Setup (38.45s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.32s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-287000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-287000: (1.324445067s)
--- PASS: TestImageBuild/serial/NormalBuild (1.32s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.66s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-287000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.66s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.2s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-287000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.20s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.18s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-287000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.18s)
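The four image-build subtests differ only in the flags passed to minikube image build; condensed, with the ./testdata paths being the integration-test fixtures referenced in the log:

	out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-287000
	out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-287000
	out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-287000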

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (73.46s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-797000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E0925 03:43:28.125460    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-797000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m13.457406242s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (73.46s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons enable ingress --alsologtostderr -v=5: (17.349980643s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.35s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (40.24s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-797000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-797000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.173139865s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-797000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-797000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6f5cfd61-0a35-4999-9b1e-3c8b45e27807] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6f5cfd61-0a35-4999-9b1e-3c8b45e27807] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.011743099s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-797000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.64.6
addons_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons disable ingress-dns --alsologtostderr -v=1: (10.908116389s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons disable ingress --alsologtostderr -v=1: (7.26396859s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.24s)
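The ingress validation above boils down to enabling the addons, waiting for the controller pod, deploying the test ingress and backend, then curling through the VM; a condensed sketch of the same sequence (the manifests are the testdata files named in the log):

	out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons enable ingress --alsologtostderr -v=5
	out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 addons enable ingress-dns --alsologtostderr -v=5
	kubectl --context ingress-addon-legacy-797000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context ingress-addon-legacy-797000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
	kubectl --context ingress-addon-legacy-797000 replace --force -f testdata/nginx-pod-svc.yaml
	out/minikube-darwin-amd64 -p ingress-addon-legacy-797000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"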

                                                
                                    
TestJSONOutput/start/Command (58.11s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-886000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0925 03:45:44.250221    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:45:59.498865    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:45:59.504254    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:45:59.516244    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:45:59.536883    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:45:59.579038    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:45:59.659298    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:45:59.819542    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:46:00.139894    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:46:00.782362    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:46:02.062647    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:46:04.624466    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:46:09.750126    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:46:11.972444    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 03:46:19.994712    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-886000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (58.108703504s)
--- PASS: TestJSONOutput/start/Command (58.11s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.46s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-886000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.4s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-886000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.16s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-886000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-886000 --output=json --user=testUser: (8.162545133s)
--- PASS: TestJSONOutput/stop/Command (8.16s)
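The TestJSONOutput group above drives a full profile lifecycle with --output=json so every step is emitted as a CloudEvent rather than plain text; in order, the commands it ran were:

	out/minikube-darwin-amd64 start -p json-output-886000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit
	out/minikube-darwin-amd64 pause -p json-output-886000 --output=json --user=testUser
	out/minikube-darwin-amd64 unpause -p json-output-886000 --output=json --user=testUser
	out/minikube-darwin-amd64 stop -p json-output-886000 --output=json --user=testUser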

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.72s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-698000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-698000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (369.532749ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"83c14dc1-3e4d-44e2-b7e5-201559e141f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-698000] minikube v1.31.2 on Darwin 13.6","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c14dc568-1c6b-4b88-b236-fe64bf169aab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17297"}}
	{"specversion":"1.0","id":"1573dcb9-16cc-46d3-a721-5878e71248f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig"}}
	{"specversion":"1.0","id":"3b53cef1-f0c0-457b-8934-90bcc02e5ff9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a385c15b-0a90-41f1-b6b7-eb6dd18159ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"118507ed-ed99-4ce3-b878-35d0909d22aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube"}}
	{"specversion":"1.0","id":"af55ba2a-3b7d-4c9e-a950-6e46f0f1d2db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"371b73ea-e5a1-4d7f-aa84-a7d764ca5ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-698000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-698000
--- PASS: TestErrorJSONOutput (0.72s)
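The error path works the same way: an intentionally unsupported --driver=fail makes start exit with status 56, and the failure is reported as an io.k8s.sigs.minikube.error event (the DRV_UNSUPPORTED_OS message in the stdout above). To reproduce, roughly:

	# expected to fail with exit status 56 on darwin/amd64
	out/minikube-darwin-amd64 start -p json-output-error-698000 --memory=2200 --output=json --wait=true --driver=fail
	# clean up the half-created profile
	out/minikube-darwin-amd64 delete -p json-output-error-698000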

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (16.46s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.455899803s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.46s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-720000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-720000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (16.27s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-729000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-729000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.264748004s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.27s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.42s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-720000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-720000 --alsologtostderr -v=5: (2.419474509s)
--- PASS: TestMountStart/serial/DeleteFirst (2.42s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (2.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-729000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-729000: (2.204403114s)
--- PASS: TestMountStart/serial/Stop (2.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (16.32s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-729000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-729000: (15.318105509s)
--- PASS: TestMountStart/serial/RestartStopped (16.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
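The TestMountStart serial steps above exercise --mount supplied at start time (rather than a separate mount process) and check that the share is still present after a stop/start cycle; the core commands, as run against the second profile:

	out/minikube-darwin-amd64 start -p mount-start-2-729000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit
	out/minikube-darwin-amd64 stop -p mount-start-2-729000
	out/minikube-darwin-amd64 start -p mount-start-2-729000
	# verify the host share inside the guest
	out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- ls /minikube-host
	out/minikube-darwin-amd64 -p mount-start-2-729000 ssh -- mount | grep 9p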

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (97.61s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-454000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0925 03:48:43.364996    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 03:49:38.680900    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:38.686869    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:38.697600    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:38.717910    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:38.758278    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:38.839163    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:39.001173    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:39.323207    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:39.964255    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:41.245019    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:43.805638    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:48.926254    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:49:59.166523    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-454000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m37.374111786s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.61s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.23s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-454000 -- rollout status deployment/busybox: (2.675908711s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-xg8mj -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-xg8mj -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-xg8mj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.23s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- sh -c "ping -c 1 192.168.64.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-xg8mj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-xg8mj -- sh -c "ping -c 1 192.168.64.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
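The host-connectivity check above resolves host.minikube.internal from inside each busybox pod and pings the resulting gateway address; in outline, for one of the generated replicas:

	out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-darwin-amd64 kubectl -p multinode-454000 -- exec busybox-5bc68d56bd-hdn97 -- sh -c "ping -c 1 192.168.64.1"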

                                                
                                    
TestMultiNode/serial/AddNode (37.18s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-454000 -v 3 --alsologtostderr
E0925 03:50:19.649051    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:50:44.267403    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-454000 -v 3 --alsologtostderr: (36.879647057s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.19s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.19s)

                                                
                                    
TestMultiNode/serial/CopyFile (4.88s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp testdata/cp-test.txt multinode-454000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2315398067/001/cp-test_multinode-454000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000:/home/docker/cp-test.txt multinode-454000-m02:/home/docker/cp-test_multinode-454000_multinode-454000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test_multinode-454000_multinode-454000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000:/home/docker/cp-test.txt multinode-454000-m03:/home/docker/cp-test_multinode-454000_multinode-454000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m03 "sudo cat /home/docker/cp-test_multinode-454000_multinode-454000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp testdata/cp-test.txt multinode-454000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2315398067/001/cp-test_multinode-454000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000-m02:/home/docker/cp-test.txt multinode-454000:/home/docker/cp-test_multinode-454000-m02_multinode-454000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000 "sudo cat /home/docker/cp-test_multinode-454000-m02_multinode-454000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000-m02:/home/docker/cp-test.txt multinode-454000-m03:/home/docker/cp-test_multinode-454000-m02_multinode-454000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test.txt"
E0925 03:50:59.516478    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m03 "sudo cat /home/docker/cp-test_multinode-454000-m02_multinode-454000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp testdata/cp-test.txt multinode-454000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2315398067/001/cp-test_multinode-454000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000-m03:/home/docker/cp-test.txt multinode-454000:/home/docker/cp-test_multinode-454000-m03_multinode-454000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000 "sudo cat /home/docker/cp-test_multinode-454000-m03_multinode-454000.txt"
E0925 03:51:00.610792    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000-m03:/home/docker/cp-test.txt multinode-454000-m02:/home/docker/cp-test_multinode-454000-m03_multinode-454000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test_multinode-454000-m03_multinode-454000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (4.88s)
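The copy matrix above is driven by minikube cp plus ssh -n for verification; the three directions it covers, shown here for the primary node and m02 (the local destination path is a placeholder):

	# host -> node
	out/minikube-darwin-amd64 -p multinode-454000 cp testdata/cp-test.txt multinode-454000:/home/docker/cp-test.txt
	# node -> host (placeholder local destination)
	out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000:/home/docker/cp-test.txt ./cp-test_multinode-454000.txt
	# node -> node
	out/minikube-darwin-amd64 -p multinode-454000 cp multinode-454000:/home/docker/cp-test.txt multinode-454000-m02:/home/docker/cp-test_multinode-454000_multinode-454000-m02.txt
	# verify on the target node
	out/minikube-darwin-amd64 -p multinode-454000 ssh -n multinode-454000-m02 "sudo cat /home/docker/cp-test_multinode-454000_multinode-454000-m02.txt"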

                                                
                                    
TestMultiNode/serial/StopNode (2.65s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-454000 node stop m03: (2.172314328s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-454000 status: exit status 7 (238.893333ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-454000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-454000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr: exit status 7 (240.103578ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-454000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-454000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 03:51:03.649613    4045 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:51:03.650189    4045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:51:03.650197    4045 out.go:309] Setting ErrFile to fd 2...
	I0925 03:51:03.650201    4045 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:51:03.650611    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 03:51:03.651062    4045 out.go:303] Setting JSON to false
	I0925 03:51:03.651088    4045 mustload.go:65] Loading cluster: multinode-454000
	I0925 03:51:03.651125    4045 notify.go:220] Checking for updates...
	I0925 03:51:03.651384    4045 config.go:182] Loaded profile config "multinode-454000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:51:03.651397    4045 status.go:255] checking status of multinode-454000 ...
	I0925 03:51:03.651776    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.651815    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.658845    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51536
	I0925 03:51:03.659226    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.659654    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.659666    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.659929    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.660056    4045 main.go:141] libmachine: (multinode-454000) Calling .GetState
	I0925 03:51:03.660160    4045 main.go:141] libmachine: (multinode-454000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:51:03.660228    4045 main.go:141] libmachine: (multinode-454000) DBG | hyperkit pid from json: 3734
	I0925 03:51:03.661429    4045 status.go:330] multinode-454000 host status = "Running" (err=<nil>)
	I0925 03:51:03.661449    4045 host.go:66] Checking if "multinode-454000" exists ...
	I0925 03:51:03.661709    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.661729    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.668630    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51538
	I0925 03:51:03.668987    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.669323    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.669339    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.669555    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.669657    4045 main.go:141] libmachine: (multinode-454000) Calling .GetIP
	I0925 03:51:03.669749    4045 host.go:66] Checking if "multinode-454000" exists ...
	I0925 03:51:03.670004    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.670045    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.679568    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51540
	I0925 03:51:03.679916    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.680252    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.680262    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.680459    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.680546    4045 main.go:141] libmachine: (multinode-454000) Calling .DriverName
	I0925 03:51:03.680672    4045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 03:51:03.680694    4045 main.go:141] libmachine: (multinode-454000) Calling .GetSSHHostname
	I0925 03:51:03.680760    4045 main.go:141] libmachine: (multinode-454000) Calling .GetSSHPort
	I0925 03:51:03.680850    4045 main.go:141] libmachine: (multinode-454000) Calling .GetSSHKeyPath
	I0925 03:51:03.680967    4045 main.go:141] libmachine: (multinode-454000) Calling .GetSSHUsername
	I0925 03:51:03.681063    4045 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/multinode-454000/id_rsa Username:docker}
	I0925 03:51:03.726756    4045 ssh_runner.go:195] Run: systemctl --version
	I0925 03:51:03.730442    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:51:03.740143    4045 kubeconfig.go:92] found "multinode-454000" server: "https://192.168.64.12:8443"
	I0925 03:51:03.740164    4045 api_server.go:166] Checking apiserver status ...
	I0925 03:51:03.740205    4045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0925 03:51:03.748664    4045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2005/cgroup
	I0925 03:51:03.754552    4045 api_server.go:182] apiserver freezer: "6:freezer:/kubepods/burstable/podc3740f065eb2ee0d57e354e7ba3191dd/85ed2a8a3b33dd141220643c5e601eb8ca0fa0d1df16eeb2841b02ea2ffcb076"
	I0925 03:51:03.754598    4045 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc3740f065eb2ee0d57e354e7ba3191dd/85ed2a8a3b33dd141220643c5e601eb8ca0fa0d1df16eeb2841b02ea2ffcb076/freezer.state
	I0925 03:51:03.760495    4045 api_server.go:204] freezer state: "THAWED"
	I0925 03:51:03.760507    4045 api_server.go:253] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0925 03:51:03.763827    4045 api_server.go:279] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0925 03:51:03.763837    4045 status.go:421] multinode-454000 apiserver status = Running (err=<nil>)
	I0925 03:51:03.763846    4045 status.go:257] multinode-454000 status: &{Name:multinode-454000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 03:51:03.763857    4045 status.go:255] checking status of multinode-454000-m02 ...
	I0925 03:51:03.764109    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.764130    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.771185    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51544
	I0925 03:51:03.771543    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.771903    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.771918    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.772133    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.772248    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .GetState
	I0925 03:51:03.772334    4045 main.go:141] libmachine: (multinode-454000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:51:03.772396    4045 main.go:141] libmachine: (multinode-454000-m02) DBG | hyperkit pid from json: 3763
	I0925 03:51:03.773595    4045 status.go:330] multinode-454000-m02 host status = "Running" (err=<nil>)
	I0925 03:51:03.773604    4045 host.go:66] Checking if "multinode-454000-m02" exists ...
	I0925 03:51:03.773862    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.773886    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.780790    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51546
	I0925 03:51:03.781148    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.781474    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.781484    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.781675    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.781780    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .GetIP
	I0925 03:51:03.781861    4045 host.go:66] Checking if "multinode-454000-m02" exists ...
	I0925 03:51:03.782107    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.782130    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.789062    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51548
	I0925 03:51:03.789402    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.789720    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.789734    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.789929    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.790022    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .DriverName
	I0925 03:51:03.790150    4045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0925 03:51:03.790163    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .GetSSHHostname
	I0925 03:51:03.790237    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .GetSSHPort
	I0925 03:51:03.790313    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .GetSSHKeyPath
	I0925 03:51:03.790412    4045 main.go:141] libmachine: (multinode-454000-m02) Calling .GetSSHUsername
	I0925 03:51:03.790478    4045 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17297-1019/.minikube/machines/multinode-454000-m02/id_rsa Username:docker}
	I0925 03:51:03.831621    4045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0925 03:51:03.840710    4045 status.go:257] multinode-454000-m02 status: &{Name:multinode-454000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0925 03:51:03.840734    4045 status.go:255] checking status of multinode-454000-m03 ...
	I0925 03:51:03.841001    4045 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:51:03.841024    4045 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:51:03.848263    4045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51551
	I0925 03:51:03.848620    4045 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:51:03.848968    4045 main.go:141] libmachine: Using API Version  1
	I0925 03:51:03.848980    4045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:51:03.849186    4045 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:51:03.849294    4045 main.go:141] libmachine: (multinode-454000-m03) Calling .GetState
	I0925 03:51:03.849375    4045 main.go:141] libmachine: (multinode-454000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:51:03.849443    4045 main.go:141] libmachine: (multinode-454000-m03) DBG | hyperkit pid from json: 3839
	I0925 03:51:03.850612    4045 main.go:141] libmachine: (multinode-454000-m03) DBG | hyperkit pid 3839 missing from process table
	I0925 03:51:03.850631    4045 status.go:330] multinode-454000-m03 host status = "Stopped" (err=<nil>)
	I0925 03:51:03.850637    4045 status.go:343] host is not running, skipping remaining checks
	I0925 03:51:03.850643    4045 status.go:257] multinode-454000-m03 status: &{Name:multinode-454000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.65s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 node start m03 --alsologtostderr
E0925 03:51:27.207511    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-454000 node start m03 --alsologtostderr: (28.841219961s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.19s)
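For reference, the single-worker stop/start cycle exercised above can be replayed by hand; a minimal sketch using the same commands and profile name as this run:

$ out/minikube-darwin-amd64 -p multinode-454000 node start m03 --alsologtostderr   # restart the worker stopped in the previous step
$ out/minikube-darwin-amd64 -p multinode-454000 status                             # m03 should report Running again
$ kubectl get nodes                                                                # all three nodes should be listed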

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (124.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-454000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-454000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-454000: (18.386546311s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr
E0925 03:52:22.532192    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr: (1m45.810799318s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-454000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.28s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-454000 node delete m03: (2.586323017s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.89s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-454000 stop: (16.324847941s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-454000 status: exit status 7 (58.947639ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-454000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr: exit status 7 (58.933974ms)

                                                
                                                
-- stdout --
	multinode-454000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-454000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0925 03:53:56.637159    4186 out.go:296] Setting OutFile to fd 1 ...
	I0925 03:53:56.637411    4186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:53:56.637416    4186 out.go:309] Setting ErrFile to fd 2...
	I0925 03:53:56.637420    4186 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0925 03:53:56.637603    4186 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17297-1019/.minikube/bin
	I0925 03:53:56.637803    4186 out.go:303] Setting JSON to false
	I0925 03:53:56.637825    4186 mustload.go:65] Loading cluster: multinode-454000
	I0925 03:53:56.637861    4186 notify.go:220] Checking for updates...
	I0925 03:53:56.638135    4186 config.go:182] Loaded profile config "multinode-454000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I0925 03:53:56.638146    4186 status.go:255] checking status of multinode-454000 ...
	I0925 03:53:56.638511    4186 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:53:56.638579    4186 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:53:56.645374    4186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51732
	I0925 03:53:56.645692    4186 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:53:56.646090    4186 main.go:141] libmachine: Using API Version  1
	I0925 03:53:56.646101    4186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:53:56.646326    4186 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:53:56.646442    4186 main.go:141] libmachine: (multinode-454000) Calling .GetState
	I0925 03:53:56.646519    4186 main.go:141] libmachine: (multinode-454000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:53:56.646587    4186 main.go:141] libmachine: (multinode-454000) DBG | hyperkit pid from json: 4113
	I0925 03:53:56.647552    4186 main.go:141] libmachine: (multinode-454000) DBG | hyperkit pid 4113 missing from process table
	I0925 03:53:56.647608    4186 status.go:330] multinode-454000 host status = "Stopped" (err=<nil>)
	I0925 03:53:56.647620    4186 status.go:343] host is not running, skipping remaining checks
	I0925 03:53:56.647625    4186 status.go:257] multinode-454000 status: &{Name:multinode-454000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0925 03:53:56.647649    4186 status.go:255] checking status of multinode-454000-m02 ...
	I0925 03:53:56.647913    4186 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0925 03:53:56.647935    4186 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0925 03:53:56.654763    4186 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51734
	I0925 03:53:56.655078    4186 main.go:141] libmachine: () Calling .GetVersion
	I0925 03:53:56.655394    4186 main.go:141] libmachine: Using API Version  1
	I0925 03:53:56.655409    4186 main.go:141] libmachine: () Calling .SetConfigRaw
	I0925 03:53:56.655626    4186 main.go:141] libmachine: () Calling .GetMachineName
	I0925 03:53:56.655730    4186 main.go:141] libmachine: (multinode-454000-m02) Calling .GetState
	I0925 03:53:56.655809    4186 main.go:141] libmachine: (multinode-454000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0925 03:53:56.655868    4186 main.go:141] libmachine: (multinode-454000-m02) DBG | hyperkit pid from json: 4133
	I0925 03:53:56.656777    4186 main.go:141] libmachine: (multinode-454000-m02) DBG | hyperkit pid 4133 missing from process table
	I0925 03:53:56.656807    4186 status.go:330] multinode-454000-m02 host status = "Stopped" (err=<nil>)
	I0925 03:53:56.656816    4186 status.go:343] host is not running, skipping remaining checks
	I0925 03:53:56.656822    4186 status.go:257] multinode-454000-m02 status: &{Name:multinode-454000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.44s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (80.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0925 03:54:38.684655    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 03:55:06.375856    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-454000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m20.040551846s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-454000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-454000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-454000-m02 --driver=hyperkit 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-454000-m02 --driver=hyperkit : exit status 14 (399.660447ms)

                                                
                                                
-- stdout --
	* [multinode-454000-m02] minikube v1.31.2 on Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-454000-m02' is duplicated with machine name 'multinode-454000-m02' in profile 'multinode-454000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-454000-m03 --driver=hyperkit 
E0925 03:55:44.270776    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-454000-m03 --driver=hyperkit : (35.29404117s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-454000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-454000: exit status 80 (244.302853ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-454000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-454000-m03 already exists in multinode-454000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-454000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-454000-m03: (3.427424319s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.41s)

                                                
                                    
TestPreload (163.52s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-932000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0925 03:57:07.350121    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-932000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m8.810882096s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-932000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-932000 image pull gcr.io/k8s-minikube/busybox: (1.19638868s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-932000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-932000: (8.225019971s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-932000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-932000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m19.898855848s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-932000 image list
helpers_test.go:175: Cleaning up "test-preload-932000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-932000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-932000: (5.246604018s)
--- PASS: TestPreload (163.52s)
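The preload check above amounts to: create a cluster on an older Kubernetes without preloaded tarballs, pull an extra image, stop, then restart on the current default version and confirm the image survived. A sketch with the same flags and profile name as this run:

$ out/minikube-darwin-amd64 start -p test-preload-932000 --memory=2200 --wait=true --preload=false --driver=hyperkit --kubernetes-version=v1.24.4
$ out/minikube-darwin-amd64 -p test-preload-932000 image pull gcr.io/k8s-minikube/busybox
$ out/minikube-darwin-amd64 stop -p test-preload-932000
$ out/minikube-darwin-amd64 start -p test-preload-932000 --memory=2200 --wait=true --driver=hyperkit
$ out/minikube-darwin-amd64 -p test-preload-932000 image list    # busybox should still be listed after the restart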

                                                
                                    
TestScheduledStopUnix (103.55s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-037000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-037000 --memory=2048 --driver=hyperkit : (32.279787009s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-037000 -n scheduled-stop-037000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --cancel-scheduled
E0925 03:59:38.689939    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-037000 -n scheduled-stop-037000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-037000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-037000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-037000: exit status 7 (52.36154ms)

                                                
                                                
-- stdout --
	scheduled-stop-037000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-037000 -n scheduled-stop-037000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-037000 -n scheduled-stop-037000: exit status 7 (51.20294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-037000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-037000
--- PASS: TestScheduledStopUnix (103.55s)
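The scheduled-stop flow above can be reproduced against any running profile; a sketch with the flags used in this run:

$ out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --schedule 5m                                          # queue a stop five minutes out
$ out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-037000 -n scheduled-stop-037000    # inspect the pending schedule
$ out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --cancel-scheduled                                     # cancel it
$ out/minikube-darwin-amd64 stop -p scheduled-stop-037000 --schedule 15s                                         # re-arm with a short delay
$ out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-037000 -n scheduled-stop-037000          # exits 7 and prints Stopped once the stop fires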

                                                
                                    
TestSkaffold (108.86s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2871379462 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-238000 --memory=2600 --driver=hyperkit 
E0925 04:00:44.274987    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 04:00:59.523461    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-238000 --memory=2600 --driver=hyperkit : (35.23842589s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2871379462 run --minikube-profile skaffold-238000 --kube-context skaffold-238000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2871379462 run --minikube-profile skaffold-238000 --kube-context skaffold-238000 --status-check=true --port-forward=false --interactive=false: (56.231396784s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6fb668f486-7wvwc" [fddb200c-697a-4de9-b941-e3494c42b609] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012885364s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-57dfddcbff-pf7n4" [0c9bc59b-e575-4aec-8e5a-f503fa0f9499] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007938893s
helpers_test.go:175: Cleaning up "skaffold-238000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-238000
E0925 04:02:22.526131    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-238000: (5.246111956s)
--- PASS: TestSkaffold (108.86s)
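The skaffold leg above drives a stock skaffold binary against a fresh minikube profile; a sketch of the invocation (the test uses a temporary copy of skaffold, so substitute your own binary, and the kubectl label queries are just an illustrative way to observe the pods the test waits on):

$ out/minikube-darwin-amd64 start -p skaffold-238000 --memory=2600 --driver=hyperkit
$ skaffold run --minikube-profile skaffold-238000 --kube-context skaffold-238000 --status-check=true --port-forward=false --interactive=false
$ kubectl get pods -l app=leeroy-app    # should reach Running, as above
$ kubectl get pods -l app=leeroy-web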

                                                
                                    
TestRunningBinaryUpgrade (163.11s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3337952715.exe start -p running-upgrade-180000 --memory=2200 --vm-driver=hyperkit 
E0925 04:04:38.640412    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 04:05:44.225225    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 04:05:59.475284    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 04:06:01.691372    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3337952715.exe start -p running-upgrade-180000 --memory=2200 --vm-driver=hyperkit : (1m31.457148243s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0925 04:07:07.376516    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:07.381636    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:07.392355    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:07.414445    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:07.455162    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:07.536808    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:07.698516    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:08.019025    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:08.661211    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:09.942629    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:07:12.502870    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-180000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m5.552307014s)
helpers_test.go:175: Cleaning up "running-upgrade-180000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-180000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-180000: (5.316954651s)
--- PASS: TestRunningBinaryUpgrade (163.11s)

                                                
                                    
TestKubernetesUpgrade (152.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
E0925 04:07:27.866086    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m12.546071376s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-542000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-542000: (8.229640652s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-542000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-542000 status --format={{.Host}}: exit status 7 (50.521428ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit : (32.227295848s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-542000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (366.637367ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-542000] minikube v1.31.2 on Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-542000
	    minikube start -p kubernetes-upgrade-542000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5420002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-542000 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit : (33.802395066s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-542000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-542000
E0925 04:09:51.229937    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-542000: (5.246839869s)
--- PASS: TestKubernetesUpgrade (152.51s)
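The upgrade/downgrade sequence above, condensed into a sketch (same binary, flags, and profile name as the run):

$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit
$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-542000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.28.2 --driver=hyperkit   # upgrading the stopped cluster succeeds
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit   # rejected: K8S_DOWNGRADE_UNSUPPORTED, exit 106
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-542000 --memory=2200 --kubernetes-version=v1.28.2 --driver=hyperkit   # restart at the current version still works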

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.41s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17297
- KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1140914030/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1140914030/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1140914030/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1140914030/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.41s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.22s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17297
- KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2979304694/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2979304694/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2979304694/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2979304694/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (149.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1743080316.exe start -p stopped-upgrade-941000 --memory=2200 --vm-driver=hyperkit 
E0925 04:07:48.347551    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:08:29.308126    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1743080316.exe start -p stopped-upgrade-941000 --memory=2200 --vm-driver=hyperkit : (1m22.355554311s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1743080316.exe -p stopped-upgrade-941000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1743080316.exe -p stopped-upgrade-941000 stop: (8.073150926s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-941000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0925 04:09:38.641057    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-941000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (58.745893346s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.18s)

                                                
                                    
TestPause/serial/Start (51.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-729000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-729000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (51.335994848s)
--- PASS: TestPause/serial/Start (51.34s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-941000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-941000: (2.895392307s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.90s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (488.190983ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-733000] minikube v1.31.2 on Darwin 13.6
	  - MINIKUBE_LOCATION=17297
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17297-1019/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17297-1019/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.49s)
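As the stderr above shows, --no-kubernetes cannot be combined with --kubernetes-version; a sketch of the rejected call and the two ways around it (the second is minikube's own suggestion for a globally configured version):

$ out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit   # exit 14, MK_USAGE
$ out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --driver=hyperkit                             # drop the version flag
$ minikube config unset kubernetes-version                                                                             # or clear the global default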

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-733000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-733000 --driver=hyperkit : (37.437702356s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-733000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.59s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (39s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-729000 --alsologtostderr -v=1 --driver=hyperkit 
E0925 04:10:44.226070    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-729000 --alsologtostderr -v=1 --driver=hyperkit : (38.979814077s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.00s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --driver=hyperkit 
E0925 04:10:59.476042    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --driver=hyperkit : (4.888343501s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-733000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-733000 status -o json: exit status 2 (133.54033ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-733000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-733000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-733000: (2.468121215s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.49s)

                                                
                                    
TestNoKubernetes/serial/Start (14.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-733000 --no-kubernetes --driver=hyperkit : (14.853522028s)
--- PASS: TestNoKubernetes/serial/Start (14.85s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (108.564347ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.11s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-733000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-733000: (2.214628163s)
--- PASS: TestNoKubernetes/serial/Stop (2.21s)

                                                
                                    
TestPause/serial/Pause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-729000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.64s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (15.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-733000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-733000 --driver=hyperkit : (15.190222862s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.19s)

                                                
                                    
TestPause/serial/VerifyStatus (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-729000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-729000 --output=json --layout=cluster: exit status 2 (158.977026ms)

                                                
                                                
-- stdout --
	{"Name":"pause-729000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-729000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)

                                                
                                    
TestPause/serial/Unpause (0.51s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-729000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

                                                
                                    
TestPause/serial/PauseAgain (0.55s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-729000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.55s)

                                                
                                    
TestPause/serial/DeletePaused (5.25s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-729000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-729000 --alsologtostderr -v=5: (5.246880352s)
--- PASS: TestPause/serial/DeletePaused (5.25s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (4.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (4.221842524s)
--- PASS: TestPause/serial/VerifyDeletedResources (4.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (51.894141327s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.89s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (111.801533ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.11s)
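Here the non-zero exit is the desired result: systemctl is-active --quiet exits 0 only for an active unit, and the 3 reported over ssh is the conventional "not running" code, so a kubelet that stays down makes this probe fail by design. A hand-run equivalent of the same check:

	# Exit 0 would mean kubelet is still active, which is the failure case for this
	# Kubernetes-less profile; a non-zero exit (3 = inactive) is what the test expects.
	out/minikube-darwin-amd64 ssh -p NoKubernetes-733000 "sudo systemctl is-active --quiet service kubelet" \
	  && echo "FAIL: kubelet is active" || echo "OK: kubelet is not running"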

                                                
                                    
TestNetworkPlugins/group/flannel/Start (66.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0925 04:12:07.377306    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m6.281924247s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7twb5" [113915af-63b4-4199-94f7-7650d7da73ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7twb5" [113915af-63b4-4199-94f7-7650d7da73ea] Running
E0925 04:12:35.071382    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009389627s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.18s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
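Taken together, the last three checks exercise in-cluster DNS, pod-local connectivity, and hairpin traffic (the pod reaching itself through its own netcat Service). A rough manual reproduction against the same context, assuming the netcat deployment from testdata/netcat-deployment.yaml is still running:

	# DNS: resolve the kubernetes.default Service from inside the pod.
	kubectl --context auto-803000 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the container listens on 8080, so localhost has to answer.
	kubectl --context auto-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: dial the pod's own Service name; this relies on hairpin NAT on the node.
	kubectl --context auto-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"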

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-szf9p" [053d6b08-171b-4e5e-8ba4-1575bc7ad4a7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.01404161s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
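The controller-pod step above is just a readiness poll on the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) with a 10-minute budget; an equivalent manual wait, sketched with plain kubectl:

	# Block until the flannel pod reports Ready, or give up after the same 10 minutes.
	kubectl --context flannel-803000 -n kube-flannel wait pod -l app=flannel \
	  --for=condition=Ready --timeout=10m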

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k52ft" [972aae5d-95f9-4b06-ba7b-e6a00ec34ef9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k52ft" [972aae5d-95f9-4b06-ba7b-e6a00ec34ef9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.00969337s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (87.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (1m27.486999669s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0925 04:13:47.306306    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (59.105890672s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cznxr" [7de3d8a3-5fb3-4d06-821e-4624b7359ef0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01333335s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-th47d" [222694f7-d444-4909-9fcf-7f9ece5c1369] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-th47d" [222694f7-d444-4909-9fcf-7f9ece5c1369] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.009271156s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9wq95" [5867eb15-348c-417d-87c9-fc195241e2b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9wq95" [5867eb15-348c-417d-87c9-fc195241e2b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.009196082s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (50.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (50.109357909s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (59.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (59.417852534s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (59.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q5kvh" [7adc3fb6-ff71-4464-a27d-de3c9b39a7c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 04:15:44.228405    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-q5kvh" [7adc3fb6-ff71-4464-a27d-de3c9b39a7c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008053345s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pgpjv" [cf820a9a-b2bc-4eb3-9970-c47797dec9eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 04:15:59.475522    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-pgpjv" [cf820a9a-b2bc-4eb3-9970-c47797dec9eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.007942546s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (58.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (58.70631832s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.71s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0925 04:17:07.379607    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m8.766500832s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.77s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (15.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cxqzp" [ff34c142-9422-43fb-81d6-feaed5774850] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cxqzp" [ff34c142-9422-43fb-81d6-feaed5774850] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.00697945s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zcg95" [a2837860-2d18-4291-b418-0e7179897a7f] Running
E0925 04:17:36.512705    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.014893052s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hcbgb" [67afb590-17d1-4851-bcf8-5e1592292f88] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hcbgb" [67afb590-17d1-4851-bcf8-5e1592292f88] Running
E0925 04:17:46.386343    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.392180    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.404198    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.424945    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.466224    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.547417    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.707762    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:46.754039    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
E0925 04:17:47.028364    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:47.669186    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:17:48.949348    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.008523036s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/false/Start (88.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-803000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m28.681470653s)
--- PASS: TestNetworkPlugins/group/false/Start (88.68s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (140.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-596000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0925 04:18:27.352797    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:18:48.197227    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
E0925 04:19:02.529765    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 04:19:08.313180    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-596000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m20.147818167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.15s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-803000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (14.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-803000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lzmtw" [d2eccb88-e0f0-42ae-894f-1de392a0224e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0925 04:19:20.425926    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:20.432174    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:20.442277    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:20.464207    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:20.504333    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:20.584812    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:20.744919    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:21.066683    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lzmtw" [d2eccb88-e0f0-42ae-894f-1de392a0224e] Running
E0925 04:19:21.708066    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:21.709634    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:21.714760    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:21.724895    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:21.746421    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:21.786872    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:21.868562    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:22.028884    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:22.349537    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:22.988332    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:19:22.990081    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:24.271002    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:19:25.549106    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.007865714s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.17s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-803000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-803000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)
E0925 04:35:28.793325    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (57.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-821000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:20:01.393421    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:20:02.675816    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:20:10.117963    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-821000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2: (57.856383231s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-596000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c6268aea-7cd6-4cd9-a56b-69bf7a0cfd85] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0925 04:20:30.235551    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [c6268aea-7cd6-4cd9-a56b-69bf7a0cfd85] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.016765915s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-596000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)
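The deploy check finishes by exec-ing into the freshly created busybox pod and reading ulimit -n, presumably to confirm the exec path works against the v1.16.0 apiserver and to record the container's open-file limit. The same probe by hand:

	kubectl --context old-k8s-version-596000 exec busybox -- /bin/sh -c "ulimit -n"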

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-596000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-596000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-596000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-596000 --alsologtostderr -v=3: (8.272533472s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-821000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3fc345a3-8000-450e-b9b3-61de538246f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0925 04:20:42.353748    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:20:43.371458    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:43.377114    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:43.387936    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:43.408356    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:43.450503    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:43.530818    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:43.636149    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:20:43.691037    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:44.011883    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:44.228899    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3fc345a3-8000-450e-b9b3-61de538246f0] Running
E0925 04:20:44.653484    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:45.935519    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015171737s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-821000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-596000 -n old-k8s-version-596000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-596000 -n old-k8s-version-596000: exit status 7 (50.203103ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-596000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
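Exit status 7 from minikube status against the stopped profile is tolerated here ("may be ok"); the step only needs the host to be down before the dashboard addon is enabled. The same sequence by hand, with the Go template quoted for the shell:

	# {{.Host}} prints just the host state; on a stopped cluster the command exits
	# non-zero (7 in this run), so the failure is swallowed on purpose.
	out/minikube-darwin-amd64 status --format='{{.Host}}' -p old-k8s-version-596000 || true
	out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-596000 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4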

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (476.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-596000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0925 04:20:48.495725    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-596000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m56.391754061s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-596000 -n old-k8s-version-596000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (476.55s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-821000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-821000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/no-preload/serial/Stop (8.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-821000 --alsologtostderr -v=3
E0925 04:20:53.615940    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:20:56.089507    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.095342    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.106841    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.127959    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.169869    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.250102    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.410499    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:56.731540    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:57.373936    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:58.654414    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:20:59.477099    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-821000 --alsologtostderr -v=3: (8.289706099s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-821000 -n no-preload-821000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-821000 -n no-preload-821000: exit status 7 (49.766898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-821000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (299.86s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-821000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:21:01.216172    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:21:03.857398    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:21:06.336359    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:21:16.577685    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:21:24.339727    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:21:37.058477    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:22:04.275085    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:22:05.303634    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:22:05.557151    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:22:07.380115    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:22:10.390867    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:10.396115    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:10.406955    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:10.427420    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:10.469148    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:10.549384    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:10.709682    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:11.031304    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:11.671589    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:12.953637    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:15.513831    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:18.019181    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:22:20.635745    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:26.269985    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
E0925 04:22:30.877001    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:34.075909    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.082048    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.092216    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.114354    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.155975    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.237627    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.398583    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:34.719132    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:35.360678    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:36.641828    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:39.202756    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:41.695511    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 04:22:44.324019    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:22:46.387150    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:22:51.357878    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:22:53.959092    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
E0925 04:22:54.626664    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:23:14.076591    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:23:15.108763    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:23:27.225585    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:23:30.434736    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:23:32.319674    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:23:39.941243    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:23:56.069023    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:24:12.360877    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:12.367141    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:12.377282    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:12.398121    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:12.440214    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:12.520425    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:12.681985    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:13.003290    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:13.643886    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:14.925054    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:17.486073    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:20.426032    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:24:21.712428    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:24:22.607512    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:32.847912    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:38.643644    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 04:24:48.116032    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:24:49.399012    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:24:53.328803    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:24:54.240968    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:25:17.989963    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:25:34.289089    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:25:43.371619    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:25:44.229767    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 04:25:56.090618    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:25:59.477762    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-821000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2: (4m59.71627385s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-821000 -n no-preload-821000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (299.86s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-m7hc7" [fa075494-e30e-45f9-9a38-ba3846a55afb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013982176s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-m7hc7" [fa075494-e30e-45f9-9a38-ba3846a55afb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008524423s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-821000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-821000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/no-preload/serial/Pause (1.85s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-821000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-821000 -n no-preload-821000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-821000 -n no-preload-821000: exit status 2 (140.526485ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-821000 -n no-preload-821000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-821000 -n no-preload-821000: exit status 2 (141.54866ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-821000 --alsologtostderr -v=1
E0925 04:26:11.066767    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-821000 -n no-preload-821000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-821000 -n no-preload-821000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.85s)

TestStartStop/group/embed-certs/serial/FirstStart (86.88s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-952000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:26:23.782793    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:26:56.210249    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:27:07.380616    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:27:10.392150    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:27:26.272216    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
E0925 04:27:34.076144    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:27:38.081927    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-952000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2: (1m26.87975046s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.88s)

TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-952000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a96487d3-e0da-4407-9252-5c7a27816511] Pending
helpers_test.go:344: "busybox" [a96487d3-e0da-4407-9252-5c7a27816511] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0925 04:27:46.388709    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [a96487d3-e0da-4407-9252-5c7a27816511] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.019315187s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-952000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-952000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-952000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/embed-certs/serial/Stop (8.27s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-952000 --alsologtostderr -v=3
E0925 04:28:01.831125    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-952000 --alsologtostderr -v=3: (8.27065247s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.27s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-952000 -n embed-certs-952000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-952000 -n embed-certs-952000: exit status 7 (50.139159ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-952000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (297.23s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-952000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-952000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2: (4m57.080810865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-952000 -n embed-certs-952000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-h2hm9" [47e68605-826a-4bf6-8f58-e49bb6b880fd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01502042s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-h2hm9" [47e68605-826a-4bf6-8f58-e49bb6b880fd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006015078s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-596000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/Pause (1.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-596000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-596000 -n old-k8s-version-596000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-596000 -n old-k8s-version-596000: exit status 2 (147.957753ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-596000 -n old-k8s-version-596000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-596000 -n old-k8s-version-596000: exit status 2 (143.728856ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-596000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-596000 -n old-k8s-version-596000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-596000 -n old-k8s-version-596000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.73s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:29:12.363050    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
E0925 04:29:20.428781    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:29:21.712451    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:29:38.644644    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
E0925 04:29:40.051619    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2: (52.727412579s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.73s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4e7309e-5f55-47dc-9c1b-e82b001b2b67] Pending
helpers_test.go:344: "busybox" [f4e7309e-5f55-47dc-9c1b-e82b001b2b67] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4e7309e-5f55-47dc-9c1b-e82b001b2b67] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.016363688s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-546000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-546000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-546000 --alsologtostderr -v=3: (8.237360697s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 7 (51.054717ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-546000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (321.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:30:27.310882    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 04:30:28.797589    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:28.802873    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:28.813417    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:28.834407    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:28.874563    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:28.956527    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:29.116989    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:29.438009    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:30.080304    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:31.361424    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:33.921968    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:39.043542    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:41.485197    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:41.491164    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:41.501739    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:41.522089    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:41.563050    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:41.644604    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:41.806077    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:42.127134    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:42.767275    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:43.373844    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:30:44.047418    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:44.229860    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 04:30:46.607847    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:49.284315    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:30:51.728968    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:30:56.090558    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kubenet-803000/client.crt: no such file or directory
E0925 04:30:59.478453    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 04:31:01.969320    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:31:09.765325    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:31:22.449802    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:31:50.723093    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/old-k8s-version-596000/client.crt: no such file or directory
E0925 04:32:03.406155    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:32:07.377035    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/skaffold-238000/client.crt: no such file or directory
E0925 04:32:10.389233    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/custom-flannel-803000/client.crt: no such file or directory
E0925 04:32:26.266422    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
E0925 04:32:34.072201    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/calico-803000/client.crt: no such file or directory
E0925 04:32:46.383465    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-546000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2: (5m21.07390899s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (321.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-86wh5" [5f5e25e8-1f28-4de2-b525-2bf0ebae78d4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010871511s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-86wh5" [5f5e25e8-1f28-4de2-b525-2bf0ebae78d4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00689791s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-952000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-952000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/embed-certs/serial/Pause (1.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-952000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-952000 -n embed-certs-952000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-952000 -n embed-certs-952000: exit status 2 (139.110925ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-952000 -n embed-certs-952000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-952000 -n embed-certs-952000: exit status 2 (139.623844ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-952000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-952000 -n embed-certs-952000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-952000 -n embed-certs-952000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.75s)
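
The Pause step above accepts exit status 2 from minikube status, because a Paused API server and a Stopped kubelet are the expected readings right after pausing. A minimal sketch of the same pause / status / unpause sequence, reusing the profile name and flags from the log; the exit-code handling mirrors the "(may be ok)" notes, the rest is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	const profile = "embed-certs-952000" // profile name from the log above

	// status runs `minikube status` with a Go-template format and returns the trimmed
	// output; exit status 2 is tolerated, mirroring the "(may be ok)" notes in the log.
	func status(format string) string {
		cmd := exec.Command("out/minikube-darwin-amd64",
			"status", "--format", format, "-p", profile, "-n", profile)
		out, err := cmd.Output()
		if err != nil {
			ee, ok := err.(*exec.ExitError)
			if !ok || ee.ExitCode() != 2 {
				panic(err)
			}
			// exit status 2 just means a component is not Running (Paused/Stopped).
		}
		return strings.TrimSpace(string(out))
	}

	func run(args ...string) {
		if err := exec.Command("out/minikube-darwin-amd64", args...).Run(); err != nil {
			panic(err)
		}
	}

	func main() {
		run("pause", "-p", profile, "--alsologtostderr", "-v=1")
		fmt.Println("after pause:", status("{{.APIServer}}"), status("{{.Kubelet}}")) // expect Paused / Stopped
		run("unpause", "-p", profile, "--alsologtostderr", "-v=1")
		fmt.Println("after unpause:", status("{{.APIServer}}"), status("{{.Kubelet}}")) // expect Running / Running
	}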

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-156000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:33:25.326522    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:33:49.315681    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/auto-803000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-156000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2: (47.812032231s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.81s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-156000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-156000 --alsologtostderr -v=3
E0925 04:34:09.434962    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/flannel-803000/client.crt: no such file or directory
E0925 04:34:12.357135    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/false-803000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-156000 --alsologtostderr -v=3: (8.271855488s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-156000 -n newest-cni-156000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-156000 -n newest-cni-156000: exit status 7 (49.534851ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-156000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-156000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2
E0925 04:34:20.423282    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:34:21.706549    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
E0925 04:34:38.640816    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/ingress-addon-legacy-797000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-156000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2: (38.026377943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-156000 -n newest-cni-156000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-156000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (1.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-156000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-156000 -n newest-cni-156000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-156000 -n newest-cni-156000: exit status 2 (150.176965ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-156000 -n newest-cni-156000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-156000 -n newest-cni-156000: exit status 2 (144.848533ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-156000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-156000 -n newest-cni-156000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-156000 -n newest-cni-156000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mmmtk" [c18e448d-4573-4da9-9d46-bcd4d2e3851e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013823692s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-mmmtk" [c18e448d-4573-4da9-9d46-bcd4d2e3851e] Running
E0925 04:35:41.478753    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/no-preload-821000/client.crt: no such file or directory
E0925 04:35:42.526993    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/functional-220000/client.crt: no such file or directory
E0925 04:35:43.368344    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/bridge-803000/client.crt: no such file or directory
E0925 04:35:43.473697    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/kindnet-803000/client.crt: no such file or directory
E0925 04:35:44.225287    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/addons-313000/client.crt: no such file or directory
E0925 04:35:44.756010    1487 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17297-1019/.minikube/profiles/enable-default-cni-803000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007134898s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-546000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-546000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-546000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 2 (145.997822ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000: exit status 2 (140.188227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-546000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-546000 -n default-k8s-diff-port-546000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.81s)

                                                
                                    

Test skip (20/318)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-803000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-803000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-803000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-803000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-803000"

                                                
                                                
----------------------- debugLogs end: cilium-803000 [took: 5.008159356s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-803000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-803000
--- SKIP: TestNetworkPlugins/group/cilium (5.38s)
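
The repeated "context was not found" and "Profile ... not found" messages in the debugLogs above follow from the kubectl config dump they include: the client configuration has no clusters or contexts, because the cilium profile is skipped before any cluster is created. A minimal sketch of inspecting a kubeconfig with client-go (the default file location is used purely for illustration):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the client configuration from the default location (illustrative choice).
		cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
		for name := range cfg.Contexts {
			fmt.Println("context:", name)
		}
		// Against the empty config dumped above (clusters: null, contexts: null), the
		// loop prints nothing, which is why lookups for "cilium-803000" fail.
	}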

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-732000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-732000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.39s)

                                                
                                    