Test Report: Hyperkit_macOS 17491

b9c6c6ec15a37d1e4d613f5544f316161403a793:2023-10-25:31608
Failed tests (2/322)

| Order | Failed test                                                       | Duration (s) |
|-------|-------------------------------------------------------------------|--------------|
| 235   | TestRunningBinaryUpgrade                                          | 107.82       |
| 365   | TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages | 2.76         |
TestRunningBinaryUpgrade (107.82s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1875133465.exe start -p running-upgrade-961000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:133: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1875133465.exe start -p running-upgrade-961000 --memory=2200 --vm-driver=hyperkit : (1m30.002697629s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-961000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-961000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (15.353090638s)

-- stdout --
	* [running-upgrade-961000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17491
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the hyperkit driver based on existing profile
	* Starting control plane node running-upgrade-961000 in cluster running-upgrade-961000
	* Updating the running hyperkit "running-upgrade-961000" VM ...
	
	

-- /stdout --
** stderr ** 
	I1025 19:21:10.599491   81271 out.go:296] Setting OutFile to fd 1 ...
	I1025 19:21:10.599780   81271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:21:10.599786   81271 out.go:309] Setting ErrFile to fd 2...
	I1025 19:21:10.599790   81271 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:21:10.599964   81271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	I1025 19:21:10.601499   81271 out.go:303] Setting JSON to false
	I1025 19:21:10.625745   81271 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":37238,"bootTime":1698249632,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1025 19:21:10.625849   81271 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 19:21:10.647927   81271 out.go:177] * [running-upgrade-961000] minikube v1.31.2 on Darwin 14.0
	I1025 19:21:10.743469   81271 out.go:177]   - MINIKUBE_LOCATION=17491
	I1025 19:21:10.722622   81271 notify.go:220] Checking for updates...
	I1025 19:21:10.801327   81271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 19:21:10.859227   81271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 19:21:10.880367   81271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 19:21:10.901415   81271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	I1025 19:21:10.922387   81271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 19:21:10.944001   81271 config.go:182] Loaded profile config "running-upgrade-961000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1025 19:21:10.944037   81271 start_flags.go:697] config upgrade: Driver=hyperkit
	I1025 19:21:10.944050   81271 start_flags.go:709] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 19:21:10.944165   81271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/running-upgrade-961000/config.json ...
	I1025 19:21:10.945341   81271 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:21:10.945407   81271 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:21:10.954405   81271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53945
	I1025 19:21:10.954773   81271 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:21:10.955229   81271 main.go:141] libmachine: Using API Version  1
	I1025 19:21:10.955256   81271 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:21:10.955504   81271 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:21:10.955606   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:10.976345   81271 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1025 19:21:10.997262   81271 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 19:21:10.997704   81271 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:21:10.997748   81271 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:21:11.006994   81271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53947
	I1025 19:21:11.007353   81271 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:21:11.007726   81271 main.go:141] libmachine: Using API Version  1
	I1025 19:21:11.007744   81271 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:21:11.007950   81271 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:21:11.008057   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:11.057537   81271 out.go:177] * Using the hyperkit driver based on existing profile
	I1025 19:21:11.078171   81271 start.go:298] selected driver: hyperkit
	I1025 19:21:11.078187   81271 start.go:902] validating driver "hyperkit" against &{Name:running-upgrade-961000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v
1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.87.11 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 19:21:11.078306   81271 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 19:21:11.082210   81271 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.082308   81271 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17491-76819/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1025 19:21:11.090056   81271 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1025 19:21:11.094359   81271 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:21:11.094383   81271 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1025 19:21:11.094465   81271 cni.go:84] Creating CNI manager for ""
	I1025 19:21:11.094486   81271 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 19:21:11.094496   81271 start_flags.go:323] config:
	{Name:running-upgrade-961000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.87.11 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 19:21:11.094672   81271 iso.go:125] acquiring lock: {Name:mk28dd82d77e5b41d6d5779f6c9eefa1a75d61e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.136341   81271 out.go:177] * Starting control plane node running-upgrade-961000 in cluster running-upgrade-961000
	I1025 19:21:11.157216   81271 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1025 19:21:11.213550   81271 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1025 19:21:11.213661   81271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/running-upgrade-961000/config.json ...
	I1025 19:21:11.213744   81271 cache.go:107] acquiring lock: {Name:mked931b330050a138a73435356c58e13649ef3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213776   81271 cache.go:107] acquiring lock: {Name:mkb29a8422b0fd02310979164accd7236a712951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213769   81271 cache.go:107] acquiring lock: {Name:mk8ef1082aad9c42eb262d52ed78efab5e04fccf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213896   81271 cache.go:115] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 19:21:11.213887   81271 cache.go:107] acquiring lock: {Name:mk7eac97b2594b28bb1c298d5deace21a4190401 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213927   81271 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 187.269µs
	I1025 19:21:11.213943   81271 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 19:21:11.213917   81271 cache.go:107] acquiring lock: {Name:mk31d0ad85400c98cc989d80a128b16f522dca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213959   81271 cache.go:107] acquiring lock: {Name:mk067a7af34b5cb1550dd1232822d08d70606ef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213993   81271 cache.go:107] acquiring lock: {Name:mk49dbc2dc0236a392c1b9dfe260b782a4c19376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.213980   81271 cache.go:107] acquiring lock: {Name:mk93ff27cdba963c9a558d35e6eaabbe5d08abbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:21:11.214129   81271 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1025 19:21:11.214130   81271 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1025 19:21:11.214352   81271 start.go:365] acquiring machines lock for running-upgrade-961000: {Name:mk32146e6cf5387e84f7f533a58800680d6b59cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 19:21:11.214433   81271 start.go:369] acquired machines lock for "running-upgrade-961000" in 64.965µs
	I1025 19:21:11.214457   81271 start.go:96] Skipping create...Using existing machine configuration
	I1025 19:21:11.214468   81271 fix.go:54] fixHost starting: minikube
	I1025 19:21:11.214691   81271 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1025 19:21:11.214766   81271 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1025 19:21:11.214901   81271 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1025 19:21:11.214917   81271 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:21:11.214922   81271 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 19:21:11.214959   81271 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:21:11.215002   81271 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1025 19:21:11.222930   81271 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1025 19:21:11.223132   81271 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1025 19:21:11.223247   81271 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1025 19:21:11.223301   81271 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 19:21:11.224302   81271 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1025 19:21:11.224421   81271 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1025 19:21:11.224520   81271 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1025 19:21:11.227356   81271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53949
	I1025 19:21:11.227703   81271 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:21:11.228079   81271 main.go:141] libmachine: Using API Version  1
	I1025 19:21:11.228090   81271 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:21:11.228295   81271 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:21:11.228424   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:11.228528   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetState
	I1025 19:21:11.228627   81271 main.go:141] libmachine: (running-upgrade-961000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:21:11.228692   81271 main.go:141] libmachine: (running-upgrade-961000) DBG | hyperkit pid from json: 81168
	I1025 19:21:11.229840   81271 fix.go:102] recreateIfNeeded on running-upgrade-961000: state=Running err=<nil>
	W1025 19:21:11.229856   81271 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 19:21:11.271845   81271 out.go:177] * Updating the running hyperkit "running-upgrade-961000" VM ...
	I1025 19:21:11.292777   81271 machine.go:88] provisioning docker machine ...
	I1025 19:21:11.292795   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:11.292958   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetMachineName
	I1025 19:21:11.293062   81271 buildroot.go:166] provisioning hostname "running-upgrade-961000"
	I1025 19:21:11.293077   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetMachineName
	I1025 19:21:11.293180   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.293260   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:11.293348   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.293427   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.293498   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:11.293590   81271 main.go:141] libmachine: Using SSH client type: native
	I1025 19:21:11.294056   81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.11 22 <nil> <nil>}
	I1025 19:21:11.294065   81271 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-961000 && echo "running-upgrade-961000" | sudo tee /etc/hostname
	I1025 19:21:11.375021   81271 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-961000
	
	I1025 19:21:11.375044   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.375182   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:11.375278   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.375372   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.375475   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:11.375610   81271 main.go:141] libmachine: Using SSH client type: native
	I1025 19:21:11.375851   81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.11 22 <nil> <nil>}
	I1025 19:21:11.375866   81271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-961000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-961000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-961000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 19:21:11.452042   81271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 19:21:11.452075   81271 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17491-76819/.minikube CaCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17491-76819/.minikube}
	I1025 19:21:11.452097   81271 buildroot.go:174] setting up certificates
	I1025 19:21:11.452112   81271 provision.go:83] configureAuth start
	I1025 19:21:11.452120   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetMachineName
	I1025 19:21:11.452266   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetIP
	I1025 19:21:11.452369   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.452472   81271 provision.go:138] copyHostCerts
	I1025 19:21:11.452540   81271 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem, removing ...
	I1025 19:21:11.452549   81271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem
	I1025 19:21:11.452674   81271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem (1082 bytes)
	I1025 19:21:11.452895   81271 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem, removing ...
	I1025 19:21:11.452901   81271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem
	I1025 19:21:11.452974   81271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem (1123 bytes)
	I1025 19:21:11.453149   81271 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem, removing ...
	I1025 19:21:11.453155   81271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem
	I1025 19:21:11.453232   81271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem (1679 bytes)
	I1025 19:21:11.453381   81271 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-961000 san=[192.168.87.11 192.168.87.11 localhost 127.0.0.1 minikube running-upgrade-961000]
	I1025 19:21:11.611480   81271 provision.go:172] copyRemoteCerts
	I1025 19:21:11.611535   81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 19:21:11.611571   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.611728   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:11.611813   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.611894   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:11.611978   81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
	I1025 19:21:11.654645   81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 19:21:11.664752   81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 19:21:11.674218   81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 19:21:11.684536   81271 provision.go:86] duration metric: configureAuth took 232.41789ms
	I1025 19:21:11.684549   81271 buildroot.go:189] setting minikube options for container-runtime
	I1025 19:21:11.684672   81271 config.go:182] Loaded profile config "running-upgrade-961000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1025 19:21:11.684705   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:11.684845   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.684948   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:11.685056   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.685146   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.685234   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:11.685372   81271 main.go:141] libmachine: Using SSH client type: native
	I1025 19:21:11.685606   81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.11 22 <nil> <nil>}
	I1025 19:21:11.685614   81271 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 19:21:11.762664   81271 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 19:21:11.762687   81271 buildroot.go:70] root file system type: tmpfs
	I1025 19:21:11.762759   81271 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 19:21:11.762779   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.762929   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:11.763019   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.763110   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.763187   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:11.763298   81271 main.go:141] libmachine: Using SSH client type: native
	I1025 19:21:11.763537   81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.11 22 <nil> <nil>}
	I1025 19:21:11.763586   81271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 19:21:11.847526   81271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 19:21:11.847557   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:11.847707   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:11.847794   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.847893   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:11.847984   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:11.848122   81271 main.go:141] libmachine: Using SSH client type: native
	I1025 19:21:11.848371   81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.11 22 <nil> <nil>}
	I1025 19:21:11.848385   81271 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 19:21:11.956672   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1025 19:21:12.123838   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1025 19:21:12.456071   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1025 19:21:12.778774   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1025 19:21:13.100634   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1025 19:21:13.424089   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1025 19:21:13.711392   81271 cache.go:162] opening:  /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1025 19:21:13.828328   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1025 19:21:13.828346   81271 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 2.614572919s
	I1025 19:21:13.828357   81271 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1025 19:21:14.691715   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1025 19:21:14.691733   81271 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 3.477901917s
	I1025 19:21:14.691742   81271 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1025 19:21:17.379825   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1025 19:21:17.379841   81271 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 6.166096182s
	I1025 19:21:17.379849   81271 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1025 19:21:18.956815   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1025 19:21:18.956844   81271 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 7.743149244s
	I1025 19:21:18.956855   81271 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1025 19:21:19.466126   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1025 19:21:19.466145   81271 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 8.252619502s
	I1025 19:21:19.466154   81271 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1025 19:21:19.859265   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1025 19:21:19.859280   81271 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 8.645763898s
	I1025 19:21:19.859288   81271 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1025 19:21:23.614904   81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1025 19:21:23.614926   81271 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 12.401413299s
	I1025 19:21:23.614935   81271 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1025 19:21:23.614950   81271 cache.go:87] Successfully saved all images to host disk.
	I1025 19:21:23.896255   81271 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 19:21:23.896274   81271 machine.go:91] provisioned docker machine in 12.603812152s
	I1025 19:21:23.896281   81271 start.go:300] post-start starting for "running-upgrade-961000" (driver="hyperkit")
	I1025 19:21:23.896290   81271 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 19:21:23.896301   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:23.896530   81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 19:21:23.896544   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:23.896644   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:23.896733   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:23.896816   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:23.896893   81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
	I1025 19:21:23.941345   81271 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 19:21:23.943962   81271 info.go:137] Remote host: Buildroot 2019.02.7
	I1025 19:21:23.943973   81271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17491-76819/.minikube/addons for local assets ...
	I1025 19:21:23.944056   81271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17491-76819/.minikube/files for local assets ...
	I1025 19:21:23.944228   81271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem -> 772902.pem in /etc/ssl/certs
	I1025 19:21:23.944411   81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 19:21:23.948442   81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem --> /etc/ssl/certs/772902.pem (1708 bytes)
	I1025 19:21:23.957444   81271 start.go:303] post-start completed in 61.157111ms
	I1025 19:21:23.957457   81271 fix.go:56] fixHost completed within 12.743323675s
	I1025 19:21:23.957474   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:23.957600   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:23.957690   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:23.957786   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:23.957875   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:23.957996   81271 main.go:141] libmachine: Using SSH client type: native
	I1025 19:21:23.958244   81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.11 22 <nil> <nil>}
	I1025 19:21:23.958252   81271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 19:21:24.034169   81271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698286884.254723586
	
	I1025 19:21:24.034184   81271 fix.go:206] guest clock: 1698286884.254723586
	I1025 19:21:24.034190   81271 fix.go:219] Guest: 2023-10-25 19:21:24.254723586 -0700 PDT Remote: 2023-10-25 19:21:23.957463 -0700 PDT m=+13.402640461 (delta=297.260586ms)
	I1025 19:21:24.034206   81271 fix.go:190] guest clock delta is within tolerance: 297.260586ms
	I1025 19:21:24.034210   81271 start.go:83] releasing machines lock for "running-upgrade-961000", held for 12.820101842s
	I1025 19:21:24.034225   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:24.034359   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetIP
	I1025 19:21:24.034454   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:24.034758   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:24.034858   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
	I1025 19:21:24.034917   81271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 19:21:24.034951   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:24.035008   81271 ssh_runner.go:195] Run: cat /version.json
	I1025 19:21:24.035022   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
	I1025 19:21:24.035038   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:24.035151   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
	I1025 19:21:24.035165   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:24.035257   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
	I1025 19:21:24.035270   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:24.035347   81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
	I1025 19:21:24.035360   81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
	I1025 19:21:24.035444   81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
	W1025 19:21:24.124825   81271 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 19:21:24.124902   81271 ssh_runner.go:195] Run: systemctl --version
	I1025 19:21:24.130195   81271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 19:21:24.133765   81271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 19:21:24.133817   81271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 19:21:24.137443   81271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 19:21:24.140978   81271 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1025 19:21:24.141000   81271 start.go:472] detecting cgroup driver to use...
	I1025 19:21:24.141098   81271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 19:21:24.148289   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1025 19:21:24.152695   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 19:21:24.156798   81271 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 19:21:24.156845   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 19:21:24.161555   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 19:21:24.165802   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 19:21:24.170045   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 19:21:24.174147   81271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 19:21:24.178989   81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 19:21:24.183346   81271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 19:21:24.187068   81271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 19:21:24.190749   81271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 19:21:24.256538   81271 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 19:21:24.267075   81271 start.go:472] detecting cgroup driver to use...
	I1025 19:21:24.267165   81271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 19:21:24.291080   81271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 19:21:24.298637   81271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 19:21:24.314361   81271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 19:21:24.320495   81271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 19:21:24.328306   81271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 19:21:24.336222   81271 ssh_runner.go:195] Run: which cri-dockerd
	I1025 19:21:24.338331   81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 19:21:24.342252   81271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 19:21:24.348763   81271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 19:21:24.405869   81271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 19:21:24.475458   81271 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 19:21:24.475547   81271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 19:21:24.482383   81271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 19:21:24.549828   81271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 19:21:25.742930   81271 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.193111097s)
	I1025 19:21:25.765158   81271 out.go:177] 
	W1025 19:21:25.786366   81271 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W1025 19:21:25.786394   81271 out.go:239] * 
	* 
	W1025 19:21:25.787562   81271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 19:21:25.852494   81271 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-961000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-25 19:21:25.90566 -0700 PDT m=+2189.939236954
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-961000 -n running-upgrade-961000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-961000 -n running-upgrade-961000: exit status 6 (145.401661ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1025 19:21:26.043596   81384 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-961000" does not appear in /Users/jenkins/minikube-integration/17491-76819/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-961000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-961000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-961000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-961000: (1.499744428s)
--- FAIL: TestRunningBinaryUpgrade (107.82s)
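For context, the sequence this test drives reduces to two start invocations, shown in the Go sketch below (reconstructed from the (dbg) commands above; the binary paths and profile name are placeholders, and the real test downloads the v1.6.2 binary to a temp path):

```go
// A minimal sketch of the upgrade flow TestRunningBinaryUpgrade exercises,
// reconstructed from the logged commands. Not the test's actual code.
package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	out, err := cmd.CombinedOutput() // capture stdout+stderr like the (dbg) lines above
	fmt.Printf("$ %s %v\n%s", bin, args, out)
	return err
}

func main() {
	const profile = "running-upgrade" // placeholder profile name
	// Step 1: create a running cluster with the legacy v1.6.2 binary
	// (this step passed in the report above, in ~1m30s).
	if err := run("./minikube-v1.6.2", "start", "-p", profile,
		"--memory=2200", "--vm-driver=hyperkit"); err != nil {
		fmt.Println("legacy start failed:", err)
		return
	}
	// Step 2: point the HEAD binary at the same still-running VM. In the
	// report this exits with status 90 (RUNTIME_ENABLE): the provisioner
	// rewrites docker.service, but `sudo systemctl restart docker` then
	// exits 1, so the upgraded start aborts.
	if err := run("out/minikube-darwin-amd64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=hyperkit"); err != nil {
		fmt.Println("upgrade start failed:", err)
	}
}
```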

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-159000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-159000 "sudo crictl images -o json": exit status 1 (142.376222ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-159000 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
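
The '\x1b' in the decode error is an ANSI color escape: crictl's FATA banner is colorized, so the captured output begins with an escape byte rather than '{', and encoding/json rejects it before any parsing happens. A sketch of tolerating that (a hypothetical helper, not the test's code, assuming the usual `crictl images -o json` shape with an images[].repoTags field):

package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

// ansi matches SGR color sequences such as \x1b[31m and \x1b[0m.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

// decodeImageTags strips color codes, then decodes the crictl JSON
// payload into a flat list of image tags.
func decodeImageTags(out []byte) ([]string, error) {
	clean := ansi.ReplaceAll(out, nil)
	var payload struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(clean, &payload); err != nil {
		return nil, fmt.Errorf("decode images json: %w", err)
	}
	var tags []string
	for _, img := range payload.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	// Here the colorized FATA line is all crictl produced, so even with the
	// escapes stripped there is no JSON body; the decode error is secondary
	// to the CRI failure itself, which is why every expected image shows as
	// missing in the diff that follows.
	_, err := decodeImageTags([]byte("\x1b[31mFATA[0000] validate service connection ...\x1b[0m"))
	fmt.Println(err)
}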
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
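
The root cause of both failures is the API mismatch in the FATA line: this old-k8s-version profile runs Kubernetes v1.16 with dockershim, which serves only CRI v1alpha2, while the crictl in the guest validates the CRI v1 image API and gets Unimplemented. A sketch of probing that directly (illustrative only, assuming google.golang.org/grpc and k8s.io/cri-api are available):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the same endpoint crictl validated; unix sockets use the
	// "unix://" scheme with gRPC's default resolver.
	conn, err := grpc.Dial("unix:///var/run/dockershim.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	// Against dockershim this call fails with codes.Unimplemented
	// ("unknown service runtime.v1.ImageService"), matching the log above;
	// a v1alpha2 client would succeed instead.
	client := runtimev1.NewImageServiceClient(conn)
	if _, err := client.ListImages(ctx, &runtimev1.ListImagesRequest{}); err != nil {
		fmt.Println("CRI v1 image API not served:", err)
	}
}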
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-159000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-159000 logs -n 25: (2.133380635s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-182000 sudo                                 | kubenet-182000         | jenkins | v1.31.2 | 25 Oct 23 19:34 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-182000 sudo                                 | kubenet-182000         | jenkins | v1.31.2 | 25 Oct 23 19:34 PDT | 25 Oct 23 19:34 PDT |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-182000 sudo find                            | kubenet-182000         | jenkins | v1.31.2 | 25 Oct 23 19:34 PDT | 25 Oct 23 19:34 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-182000 sudo crio                            | kubenet-182000         | jenkins | v1.31.2 | 25 Oct 23 19:34 PDT | 25 Oct 23 19:34 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-182000                                      | kubenet-182000         | jenkins | v1.31.2 | 25 Oct 23 19:34 PDT | 25 Oct 23 19:34 PDT |
	| start   | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:34 PDT | 25 Oct 23 19:35 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-159000        | old-k8s-version-159000 | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:35 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-159000                              | old-k8s-version-159000 | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:35 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-080000             | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:35 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:35 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-159000             | old-k8s-version-159000 | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:35 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-159000                              | old-k8s-version-159000 | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:43 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-080000                  | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:35 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:35 PDT | 25 Oct 23 19:40 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-080000 sudo                              | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:40 PDT | 25 Oct 23 19:40 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:40 PDT | 25 Oct 23 19:40 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:40 PDT | 25 Oct 23 19:40 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:40 PDT | 25 Oct 23 19:40 PDT |
	| delete  | -p no-preload-080000                                   | no-preload-080000      | jenkins | v1.31.2 | 25 Oct 23 19:40 PDT | 25 Oct 23 19:40 PDT |
	| start   | -p embed-certs-195000                                  | embed-certs-195000     | jenkins | v1.31.2 | 25 Oct 23 19:40 PDT | 25 Oct 23 19:41 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-195000            | embed-certs-195000     | jenkins | v1.31.2 | 25 Oct 23 19:41 PDT | 25 Oct 23 19:41 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-195000                                  | embed-certs-195000     | jenkins | v1.31.2 | 25 Oct 23 19:41 PDT | 25 Oct 23 19:42 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-195000                 | embed-certs-195000     | jenkins | v1.31.2 | 25 Oct 23 19:42 PDT | 25 Oct 23 19:42 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-195000                                  | embed-certs-195000     | jenkins | v1.31.2 | 25 Oct 23 19:42 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                        |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-159000 sudo                         | old-k8s-version-159000 | jenkins | v1.31.2 | 25 Oct 23 19:43 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 19:42:02
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 19:42:02.944210   84989 out.go:296] Setting OutFile to fd 1 ...
	I1025 19:42:02.944422   84989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:42:02.944427   84989 out.go:309] Setting ErrFile to fd 2...
	I1025 19:42:02.944432   84989 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:42:02.944600   84989 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	I1025 19:42:02.946066   84989 out.go:303] Setting JSON to false
	I1025 19:42:02.968231   84989 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":38490,"bootTime":1698249632,"procs":496,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1025 19:42:02.968330   84989 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 19:42:02.990564   84989 out.go:177] * [embed-certs-195000] minikube v1.31.2 on Darwin 14.0
	I1025 19:42:03.053139   84989 out.go:177]   - MINIKUBE_LOCATION=17491
	I1025 19:42:03.032251   84989 notify.go:220] Checking for updates...
	I1025 19:42:03.095209   84989 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 19:42:03.116197   84989 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 19:42:03.136939   84989 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 19:42:03.158171   84989 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	I1025 19:42:03.179169   84989 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 19:42:03.200438   84989 config.go:182] Loaded profile config "embed-certs-195000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 19:42:03.200789   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:03.200840   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:03.209166   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57843
	I1025 19:42:03.209517   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:03.209918   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:03.209934   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:03.210142   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:03.210237   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:03.210416   84989 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 19:42:03.210647   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:03.210669   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:03.218559   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57845
	I1025 19:42:03.218870   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:03.219231   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:03.219244   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:03.219445   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:03.219551   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:03.248186   84989 out.go:177] * Using the hyperkit driver based on existing profile
	I1025 19:42:03.290022   84989 start.go:298] selected driver: hyperkit
	I1025 19:42:03.290035   84989 start.go:902] validating driver "hyperkit" against &{Name:embed-certs-195000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.87.28 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 19:42:03.290139   84989 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 19:42:03.293367   84989 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:42:03.293478   84989 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17491-76819/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1025 19:42:03.301490   84989 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1025 19:42:03.305365   84989 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:03.305386   84989 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1025 19:42:03.305515   84989 start_flags.go:934] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 19:42:03.305581   84989 cni.go:84] Creating CNI manager for ""
	I1025 19:42:03.305593   84989 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 19:42:03.305604   84989 start_flags.go:323] config:
	{Name:embed-certs-195000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.87.28 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 19:42:03.305752   84989 iso.go:125] acquiring lock: {Name:mk28dd82d77e5b41d6d5779f6c9eefa1a75d61e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 19:42:03.348132   84989 out.go:177] * Starting control plane node embed-certs-195000 in cluster embed-certs-195000
	I1025 19:42:03.368980   84989 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 19:42:03.369021   84989 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 19:42:03.369047   84989 cache.go:56] Caching tarball of preloaded images
	I1025 19:42:03.369146   84989 preload.go:174] Found /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 19:42:03.369157   84989 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 19:42:03.369239   84989 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/config.json ...
	I1025 19:42:03.369715   84989 start.go:365] acquiring machines lock for embed-certs-195000: {Name:mk32146e6cf5387e84f7f533a58800680d6b59cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 19:42:03.369777   84989 start.go:369] acquired machines lock for "embed-certs-195000" in 41.172µs
	I1025 19:42:03.369794   84989 start.go:96] Skipping create...Using existing machine configuration
	I1025 19:42:03.369804   84989 fix.go:54] fixHost starting: 
	I1025 19:42:03.370024   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:03.370051   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:03.377921   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57847
	I1025 19:42:03.378277   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:03.378676   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:03.378692   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:03.378906   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:03.379006   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:03.379100   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetState
	I1025 19:42:03.379176   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:03.379243   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 84943
	I1025 19:42:03.380295   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid 84943 missing from process table
	I1025 19:42:03.380337   84989 fix.go:102] recreateIfNeeded on embed-certs-195000: state=Stopped err=<nil>
	I1025 19:42:03.380366   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	W1025 19:42:03.380457   84989 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 19:42:03.422146   84989 out.go:177] * Restarting existing hyperkit VM for "embed-certs-195000" ...
	I1025 19:42:02.238375   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:02.238390   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 19:42:02.238395   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:02.238400   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:02.238404   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:02.238423   84617 retry.go:31] will retry after 822.048645ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:03.063318   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:03.063335   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 19:42:03.063340   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:03.063346   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:03.063349   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:03.063358   84617 retry.go:31] will retry after 1.185462104s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:04.253159   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:04.253175   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 19:42:04.253181   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:04.253195   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:04.253200   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:04.253210   84617 retry.go:31] will retry after 1.153222196s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:05.467570   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:05.467585   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:05.467589   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:05.467594   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:05.467607   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:05.467617   84617 retry.go:31] will retry after 2.306573873s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:03.445011   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Start
	I1025 19:42:03.445254   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:03.445308   84989 main.go:141] libmachine: (embed-certs-195000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/hyperkit.pid
	I1025 19:42:03.446457   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid 84943 missing from process table
	I1025 19:42:03.446471   84989 main.go:141] libmachine: (embed-certs-195000) DBG | pid 84943 is in state "Stopped"
	I1025 19:42:03.446491   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/hyperkit.pid...
	I1025 19:42:03.446639   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Using UUID 15ea7d34-73a9-11ee-b318-149d997fca88
	I1025 19:42:03.473719   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Generated MAC c6:b3:cb:30:a7:e0
	I1025 19:42:03.473741   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-195000
	I1025 19:42:03.473909   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"15ea7d34-73a9-11ee-b318-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000449e30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1025 19:42:03.473988   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"15ea7d34-73a9-11ee-b318-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000449e30)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1025 19:42:03.474054   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "15ea7d34-73a9-11ee-b318-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/embed-certs-195000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/tty,log=/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/bzimage,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-195000"}
	I1025 19:42:03.474116   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 15ea7d34-73a9-11ee-b318-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/embed-certs-195000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/tty,log=/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/console-ring -f kexec,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/bzimage,/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-195000"
	I1025 19:42:03.474134   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1025 19:42:03.475676   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 DEBUG: hyperkit: Pid is 85000
	I1025 19:42:03.476109   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Attempt 0
	I1025 19:42:03.476144   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:03.476217   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 85000
	I1025 19:42:03.478243   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Searching for c6:b3:cb:30:a7:e0 in /var/db/dhcpd_leases ...
	I1025 19:42:03.478715   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Found 415 entries in /var/db/dhcpd_leases!
	I1025 19:42:03.478732   84989 main.go:141] libmachine: (embed-certs-195000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.87.28 HWAddress:c6:b3:cb:30:a7:e0 ID:1,c6:b3:cb:30:a7:e0 Lease:0x653b233f}
	I1025 19:42:03.478743   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Found match: c6:b3:cb:30:a7:e0
	I1025 19:42:03.478751   84989 main.go:141] libmachine: (embed-certs-195000) DBG | IP: 192.168.87.28
	I1025 19:42:03.478810   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetConfigRaw
	I1025 19:42:03.479475   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetIP
	I1025 19:42:03.479677   84989 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/config.json ...
	I1025 19:42:03.480100   84989 machine.go:88] provisioning docker machine ...
	I1025 19:42:03.480113   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:03.480237   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetMachineName
	I1025 19:42:03.480337   84989 buildroot.go:166] provisioning hostname "embed-certs-195000"
	I1025 19:42:03.480347   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetMachineName
	I1025 19:42:03.480448   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:03.480550   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:03.480672   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:03.480805   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:03.480926   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:03.481121   84989 main.go:141] libmachine: Using SSH client type: native
	I1025 19:42:03.481483   84989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.28 22 <nil> <nil>}
	I1025 19:42:03.481494   84989 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-195000 && echo "embed-certs-195000" | sudo tee /etc/hostname
	I1025 19:42:03.484490   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1025 19:42:03.492658   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1025 19:42:03.493603   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1025 19:42:03.493636   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1025 19:42:03.493677   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1025 19:42:03.493697   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1025 19:42:03.868111   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1025 19:42:03.868126   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1025 19:42:03.972148   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1025 19:42:03.972169   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1025 19:42:03.972180   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1025 19:42:03.972191   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1025 19:42:03.973032   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1025 19:42:03.973048   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:03 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1025 19:42:07.778314   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:07.778329   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:07.778333   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:07.778339   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:07.778351   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:07.778364   84617 retry.go:31] will retry after 2.236265774s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:10.018538   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:10.018554   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:10.018558   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:10.018566   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:10.018571   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:10.018581   84617 retry.go:31] will retry after 2.930683928s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:08.932561   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1025 19:42:08.932578   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1025 19:42:08.932591   84989 main.go:141] libmachine: (embed-certs-195000) DBG | 2023/10/25 19:42:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1025 19:42:12.952679   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:12.952692   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:12.952699   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:12.952704   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:12.952709   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:12.952719   84617 retry.go:31] will retry after 3.661167426s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:16.616512   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:16.616526   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:16.616530   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:16.616535   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:16.616539   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:16.616550   84617 retry.go:31] will retry after 3.439946814s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:16.673797   84989 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-195000
	
	I1025 19:42:16.673817   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:16.673950   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:16.674048   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:16.674169   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:16.674256   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:16.674386   84989 main.go:141] libmachine: Using SSH client type: native
	I1025 19:42:16.674629   84989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.28 22 <nil> <nil>}
	I1025 19:42:16.674642   84989 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-195000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-195000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-195000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 19:42:16.745413   84989 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 19:42:16.745435   84989 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17491-76819/.minikube CaCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17491-76819/.minikube}
	I1025 19:42:16.745449   84989 buildroot.go:174] setting up certificates
	I1025 19:42:16.745460   84989 provision.go:83] configureAuth start
	I1025 19:42:16.745467   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetMachineName
	I1025 19:42:16.745597   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetIP
	I1025 19:42:16.745708   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:16.745789   84989 provision.go:138] copyHostCerts
	I1025 19:42:16.745867   84989 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem, removing ...
	I1025 19:42:16.745876   84989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem
	I1025 19:42:16.746016   84989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem (1082 bytes)
	I1025 19:42:16.746247   84989 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem, removing ...
	I1025 19:42:16.746253   84989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem
	I1025 19:42:16.746317   84989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem (1123 bytes)
	I1025 19:42:16.746715   84989 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem, removing ...
	I1025 19:42:16.746722   84989 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem
	I1025 19:42:16.746793   84989 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem (1679 bytes)
	I1025 19:42:16.746946   84989 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem org=jenkins.embed-certs-195000 san=[192.168.87.28 192.168.87.28 localhost 127.0.0.1 minikube embed-certs-195000]
	I1025 19:42:16.999540   84989 provision.go:172] copyRemoteCerts
	I1025 19:42:16.999623   84989 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 19:42:16.999644   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:16.999791   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:16.999904   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.000008   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.000098   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:17.038372   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 19:42:17.054079   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 19:42:17.069747   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 19:42:17.085740   84989 provision.go:86] duration metric: configureAuth took 340.256868ms
	I1025 19:42:17.085752   84989 buildroot.go:189] setting minikube options for container-runtime
	I1025 19:42:17.085932   84989 config.go:182] Loaded profile config "embed-certs-195000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 19:42:17.085949   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:17.086081   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.086167   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.086249   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.086325   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.086399   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.086509   84989 main.go:141] libmachine: Using SSH client type: native
	I1025 19:42:17.086746   84989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.28 22 <nil> <nil>}
	I1025 19:42:17.086755   84989 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 19:42:17.152638   84989 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 19:42:17.152653   84989 buildroot.go:70] root file system type: tmpfs
	I1025 19:42:17.152730   84989 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 19:42:17.152744   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.152869   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.152956   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.153050   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.153133   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.153261   84989 main.go:141] libmachine: Using SSH client type: native
	I1025 19:42:17.153498   84989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.28 22 <nil> <nil>}
	I1025 19:42:17.153546   84989 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 19:42:17.228041   84989 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 19:42:17.228068   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.228205   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.228291   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.228373   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.228483   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.228606   84989 main.go:141] libmachine: Using SSH client type: native
	I1025 19:42:17.228851   84989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.28 22 <nil> <nil>}
	I1025 19:42:17.228864   84989 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 19:42:17.796129   84989 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 19:42:17.796142   84989 machine.go:91] provisioned docker machine in 14.315617703s
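	(Editor's note: the diff-or-swap command just above is the idempotent update pattern minikube uses for the unit: write the candidate to docker.service.new, and only when it differs from the live file move it into place and restart the daemon; here the live unit did not exist yet, so diff failed and the swap ran. A minimal local sketch of the same shape in Go, assuming a systemd host and passwordless sudo, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same shape as the SSH command in the log: leave docker.service
        // untouched when the candidate is identical, otherwise swap it in
        // and bounce the daemon. Paths mirror the log; sudo is assumed.
        script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl daemon-reload && sudo systemctl restart docker; }`
        out, err := exec.Command("/bin/sh", "-c", script).CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }
	)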
	I1025 19:42:17.796148   84989 start.go:300] post-start starting for "embed-certs-195000" (driver="hyperkit")
	I1025 19:42:17.796158   84989 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 19:42:17.796170   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:17.796349   84989 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 19:42:17.796363   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.796459   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.796557   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.796669   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.796752   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:17.837238   84989 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 19:42:17.839867   84989 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 19:42:17.839887   84989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17491-76819/.minikube/addons for local assets ...
	I1025 19:42:17.839973   84989 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17491-76819/.minikube/files for local assets ...
	I1025 19:42:17.840113   84989 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem -> 772902.pem in /etc/ssl/certs
	I1025 19:42:17.840275   84989 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 19:42:17.846406   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem --> /etc/ssl/certs/772902.pem (1708 bytes)
	I1025 19:42:17.861817   84989 start.go:303] post-start completed in 65.660035ms
	I1025 19:42:17.861835   84989 fix.go:56] fixHost completed within 14.491613105s
	I1025 19:42:17.861870   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.862014   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.862111   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.862200   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.862298   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.862413   84989 main.go:141] libmachine: Using SSH client type: native
	I1025 19:42:17.862662   84989 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil>  [] 0s} 192.168.87.28 22 <nil> <nil>}
	I1025 19:42:17.862670   84989 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 19:42:17.929859   84989 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698288137.999745633
	
	I1025 19:42:17.929875   84989 fix.go:206] guest clock: 1698288137.999745633
	I1025 19:42:17.929880   84989 fix.go:219] Guest: 2023-10-25 19:42:17.999745633 -0700 PDT Remote: 2023-10-25 19:42:17.861859 -0700 PDT m=+14.962439116 (delta=137.886633ms)
	I1025 19:42:17.929895   84989 fix.go:190] guest clock delta is within tolerance: 137.886633ms
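	(Editor's note: the fix.go lines above implement a simple skew check: parse the guest's `date +%s.%N` output, diff it against the host clock, and accept the machine when the delta is inside tolerance. A toy sketch; the tolerance constant here is an assumption for illustration, the real threshold lives in minikube's fix.go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Value parsed from the SSH output above: 1698288137.999745633.
        guest := time.Unix(1698288137, 999745633)
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed, not minikube's constant
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
    }
	)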
	I1025 19:42:17.929902   84989 start.go:83] releasing machines lock for "embed-certs-195000", held for 14.559694333s
	I1025 19:42:17.929918   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:17.930052   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetIP
	I1025 19:42:17.930164   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:17.930460   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:17.930570   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:17.930681   84989 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 19:42:17.930701   84989 ssh_runner.go:195] Run: cat /version.json
	I1025 19:42:17.930712   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.930714   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:17.930837   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.930853   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:17.930961   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.931026   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:17.931074   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.931134   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:17.931162   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:17.931222   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:17.964895   84989 ssh_runner.go:195] Run: systemctl --version
	I1025 19:42:18.016964   84989 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 19:42:18.021136   84989 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 19:42:18.021174   84989 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 19:42:18.031444   84989 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 19:42:18.031466   84989 start.go:472] detecting cgroup driver to use...
	I1025 19:42:18.031583   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 19:42:18.043718   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 19:42:18.050263   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 19:42:18.056794   84989 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 19:42:18.056838   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 19:42:18.063532   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 19:42:18.070235   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 19:42:18.077232   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 19:42:18.084199   84989 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 19:42:18.091262   84989 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 19:42:18.097899   84989 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 19:42:18.103782   84989 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 19:42:18.109584   84989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 19:42:18.189606   84989 ssh_runner.go:195] Run: sudo systemctl restart containerd
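	(Editor's note: the run of sed commands above rewrites /etc/containerd/config.toml so containerd matches the chosen cgroup driver: SystemdCgroup is forced to false (i.e. cgroupfs), the legacy runc v1 runtimes are mapped to io.containerd.runc.v2, and conf_dir is pinned to /etc/cni/net.d. The central edit, sketched in Go instead of sed; the path is taken from the log and the program needs root to actually write:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Mirror of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
	)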
	I1025 19:42:18.201414   84989 start.go:472] detecting cgroup driver to use...
	I1025 19:42:18.201497   84989 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 19:42:18.213380   84989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 19:42:18.223027   84989 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 19:42:18.235081   84989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 19:42:18.244111   84989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 19:42:18.252978   84989 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 19:42:18.280505   84989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 19:42:18.288931   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 19:42:18.301320   84989 ssh_runner.go:195] Run: which cri-dockerd
	I1025 19:42:18.303798   84989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 19:42:18.309365   84989 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 19:42:18.320510   84989 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 19:42:18.405859   84989 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 19:42:18.494558   84989 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 19:42:18.494643   84989 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 19:42:18.506205   84989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 19:42:18.601036   84989 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 19:42:19.876683   84989 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.27559071s)
	I1025 19:42:19.876741   84989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 19:42:19.976722   84989 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 19:42:20.060709   84989 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 19:42:20.160727   84989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 19:42:20.260235   84989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 19:42:20.276641   84989 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 19:42:20.382122   84989 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 19:42:20.441354   84989 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 19:42:20.441430   84989 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 19:42:20.445089   84989 start.go:540] Will wait 60s for crictl version
	I1025 19:42:20.445133   84989 ssh_runner.go:195] Run: which crictl
	I1025 19:42:20.447653   84989 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 19:42:20.485290   84989 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1025 19:42:20.485362   84989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 19:42:20.502803   84989 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 19:42:20.060732   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:20.060743   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:20.060748   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:20.060754   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:20.060774   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:20.060785   84617 retry.go:31] will retry after 5.130323873s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
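	(Editor's note: the retry.go lines come from the parallel old-k8s-version test (pid 84617) interleaved into this log, and show minikube's generic wait loop: list the kube-system pods, and while control-plane components are still missing, sleep a growing, jittered interval and try again. A self-contained sketch of that shape; checkComponents and the backoff formula are stand-ins, not minikube's implementation:

    package main

    import (
        "log"
        "math/rand"
        "time"
    )

    // checkComponents is a placeholder for the real pod listing; it
    // reports which control-plane components are still missing.
    func checkComponents() []string {
        return nil // pretend everything is running
    }

    func main() {
        wait := 2 * time.Second
        for {
            missing := checkComponents()
            if len(missing) == 0 {
                return
            }
            // Grow the interval and add jitter, like the 5.13s / 7.51s /
            // 9.84s waits in the log.
            wait = wait*3/2 + time.Duration(rand.Int63n(int64(time.Second)))
            log.Printf("will retry after %v: missing components: %v", wait, missing)
            time.Sleep(wait)
        }
    }
	)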
	I1025 19:42:20.562186   84989 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 19:42:20.562268   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetIP
	I1025 19:42:20.562659   84989 ssh_runner.go:195] Run: grep 192.168.87.1	host.minikube.internal$ /etc/hosts
	I1025 19:42:20.566932   84989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.87.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 19:42:20.575482   84989 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 19:42:20.575540   84989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 19:42:20.588701   84989 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1025 19:42:20.588721   84989 docker.go:619] Images already preloaded, skipping extraction
	I1025 19:42:20.588791   84989 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 19:42:20.601705   84989 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1025 19:42:20.601731   84989 cache_images.go:84] Images are preloaded, skipping loading
	I1025 19:42:20.601803   84989 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 19:42:20.619043   84989 cni.go:84] Creating CNI manager for ""
	I1025 19:42:20.619057   84989 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 19:42:20.619072   84989 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 19:42:20.619087   84989 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.87.28 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-195000 NodeName:embed-certs-195000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.87.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.87.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 19:42:20.619174   84989 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.87.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-195000"
	  kubeletExtraArgs:
	    node-ip: 192.168.87.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.87.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
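	(Editor's note: the generated kubeadm config above is four YAML documents in a single file, separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. kubeadm reads them all from one --config path; pulling such a file apart by hand is just a separator split, as in this small sketch, with the path taken from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Println(err)
            return
        }
        docs := strings.Split(string(data), "\n---\n")
        fmt.Printf("found %d YAML documents\n", len(docs))
    }
	)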
	
	I1025 19:42:20.619235   84989 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-195000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.87.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:embed-certs-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 19:42:20.619289   84989 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 19:42:20.625414   84989 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 19:42:20.625455   84989 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 19:42:20.631397   84989 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 19:42:20.642505   84989 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 19:42:20.653677   84989 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I1025 19:42:20.664914   84989 ssh_runner.go:195] Run: grep 192.168.87.28	control-plane.minikube.internal$ /etc/hosts
	I1025 19:42:20.667388   84989 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.87.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
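	(Editor's note: both /etc/hosts edits above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent trick: strip any existing line for the name with grep -v, append the fresh entry, and copy the temp file back over /etc/hosts. Roughly the same in Go; the address and hostname are from the log, and writing /etc/hosts needs root:

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // The grep -v step: drop any stale entry for the same name.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.87.28\t"+host)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
    }
	)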
	I1025 19:42:20.675792   84989 certs.go:56] Setting up /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000 for IP: 192.168.87.28
	I1025 19:42:20.675810   84989 certs.go:190] acquiring lock for shared ca certs: {Name:mk56451a86b29b7de481f6b11a773d5bea97e8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 19:42:20.675951   84989 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.key
	I1025 19:42:20.676001   84989 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17491-76819/.minikube/proxy-client-ca.key
	I1025 19:42:20.676080   84989 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/client.key
	I1025 19:42:20.676138   84989 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/apiserver.key.8ef87e84
	I1025 19:42:20.676182   84989 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/proxy-client.key
	I1025 19:42:20.676350   84989 certs.go:437] found cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/77290.pem (1338 bytes)
	W1025 19:42:20.676385   84989 certs.go:433] ignoring /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/77290_empty.pem, impossibly tiny 0 bytes
	I1025 19:42:20.676393   84989 certs.go:437] found cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 19:42:20.676422   84989 certs.go:437] found cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem (1082 bytes)
	I1025 19:42:20.676457   84989 certs.go:437] found cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem (1123 bytes)
	I1025 19:42:20.676488   84989 certs.go:437] found cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem (1679 bytes)
	I1025 19:42:20.676552   84989 certs.go:437] found cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem (1708 bytes)
	I1025 19:42:20.677086   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 19:42:20.693283   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 19:42:20.709377   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 19:42:20.725636   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/embed-certs-195000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 19:42:20.741741   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 19:42:20.758242   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1025 19:42:20.774268   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 19:42:20.790020   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 19:42:20.806075   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem --> /usr/share/ca-certificates/772902.pem (1708 bytes)
	I1025 19:42:20.822020   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 19:42:20.838058   84989 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/77290.pem --> /usr/share/ca-certificates/77290.pem (1338 bytes)
	I1025 19:42:20.854081   84989 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 19:42:20.865431   84989 ssh_runner.go:195] Run: openssl version
	I1025 19:42:20.869127   84989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/772902.pem && ln -fs /usr/share/ca-certificates/772902.pem /etc/ssl/certs/772902.pem"
	I1025 19:42:20.876302   84989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/772902.pem
	I1025 19:42:20.879218   84989 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 01:51 /usr/share/ca-certificates/772902.pem
	I1025 19:42:20.879254   84989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/772902.pem
	I1025 19:42:20.882870   84989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/772902.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 19:42:20.889498   84989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 19:42:20.896160   84989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 19:42:20.899149   84989 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 01:46 /usr/share/ca-certificates/minikubeCA.pem
	I1025 19:42:20.899181   84989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 19:42:20.902787   84989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 19:42:20.909351   84989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77290.pem && ln -fs /usr/share/ca-certificates/77290.pem /etc/ssl/certs/77290.pem"
	I1025 19:42:20.916202   84989 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77290.pem
	I1025 19:42:20.919349   84989 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 01:51 /usr/share/ca-certificates/77290.pem
	I1025 19:42:20.919385   84989 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77290.pem
	I1025 19:42:20.923007   84989 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77290.pem /etc/ssl/certs/51391683.0"
	I1025 19:42:20.929665   84989 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 19:42:20.932409   84989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 19:42:20.936110   84989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 19:42:20.939902   84989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 19:42:20.943484   84989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 19:42:20.947219   84989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 19:42:20.950831   84989 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
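	(Editor's note: each `openssl x509 ... -checkend 86400` above asks one question: will this certificate still be valid 24 hours from now? openssl exits 0 if yes, so a clean run means no cert needs regenerating. The same check in Go's crypto/x509, using one of the paths from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent of -checkend 86400: still valid 24h from now?
        ok := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
        fmt.Println("valid for the next 24h:", ok)
    }
	)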
	I1025 19:42:20.954522   84989 kubeadm.go:404] StartCluster: {Name:embed-certs-195000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:embed-certs-195000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.87.28 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 19:42:20.954612   84989 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 19:42:20.967417   84989 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 19:42:20.973838   84989 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 19:42:20.973856   84989 kubeadm.go:636] restartCluster start
	I1025 19:42:20.973900   84989 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 19:42:20.979813   84989 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:20.980230   84989 kubeconfig.go:135] verify returned: extract IP: "embed-certs-195000" does not appear in /Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 19:42:20.980382   84989 kubeconfig.go:146] "embed-certs-195000" context is missing from /Users/jenkins/minikube-integration/17491-76819/kubeconfig - will repair!
	I1025 19:42:20.980726   84989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17491-76819/kubeconfig: {Name:mkd34fe72df098023f5c63e89fff2d5fe1ec696f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 19:42:20.982138   84989 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 19:42:20.987916   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:20.987954   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:20.995920   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:20.995933   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:20.995967   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:21.003665   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:21.504257   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:21.504385   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:21.514103   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:22.003806   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:22.003861   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:22.012222   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:22.505701   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:22.505903   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:22.515234   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
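	(Editor's note: the stanzas above repeat roughly every 500ms: run `pgrep -xnf kube-apiserver.*minikube.*` over SSH, treat exit status 1 (no match) as "not up yet", and keep polling until a deadline; when the deadline passes, the code concludes the cluster needs reconfiguring, as the context-deadline line further down shows. A local sketch of that loop, with the deadline assumed for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(10 * time.Second) // assumed for the sketch
        for time.Now().Before(deadline) {
            // Exit status 0 means a matching process exists.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never appeared: context deadline exceeded")
    }
	)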
	I1025 19:42:25.195182   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:25.195195   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:25.195200   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:25.195205   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:25.195212   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:25.195221   84617 retry.go:31] will retry after 7.512373564s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:23.004693   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:23.004821   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:23.014740   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:23.505162   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:23.505329   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:23.515117   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:24.004058   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:24.004193   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:24.013746   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:24.503861   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:24.503979   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:24.513124   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:25.004104   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:25.004203   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:25.013843   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:25.505368   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:25.505510   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:25.515393   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:26.005282   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:26.005383   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:26.015127   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:26.504553   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:26.504688   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:26.514297   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:27.003936   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:27.004043   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:27.012633   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:27.504055   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:27.504197   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:27.512657   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:28.005453   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:28.005605   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:28.015277   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:28.504500   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:28.504646   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:28.514107   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:29.005097   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:29.005250   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:29.014824   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:29.506127   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:29.506268   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:29.515753   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:30.004175   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:30.004349   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:30.013062   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:30.504081   84989 api_server.go:166] Checking apiserver status ...
	I1025 19:42:30.504182   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 19:42:30.513776   84989 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 19:42:30.989344   84989 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1025 19:42:30.989465   84989 kubeadm.go:1128] stopping kube-system containers ...
	I1025 19:42:30.989577   84989 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 19:42:31.005708   84989 docker.go:464] Stopping containers: [ffe0c9555966 22dec598ad8e b53a0d15b8f7 8d09eec241b2 5fe4ac0491a2 c35bde292d52 efaa68fe2567 f04996b70a87 337ef1127829 289ee2f106d5 64103b068d27 90f2d8d99223 9889364b105a 0131227d7c8d 403c3aab8293]
	I1025 19:42:31.005787   84989 ssh_runner.go:195] Run: docker stop ffe0c9555966 22dec598ad8e b53a0d15b8f7 8d09eec241b2 5fe4ac0491a2 c35bde292d52 efaa68fe2567 f04996b70a87 337ef1127829 289ee2f106d5 64103b068d27 90f2d8d99223 9889364b105a 0131227d7c8d 403c3aab8293
	I1025 19:42:31.019600   84989 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 19:42:31.030201   84989 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 19:42:31.036124   84989 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 19:42:31.036168   84989 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 19:42:31.042050   84989 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 19:42:31.042060   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 19:42:31.104565   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 19:42:31.887283   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 19:42:32.022387   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 19:42:32.094800   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
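	(Editor's note: this is the "cluster restart" path: instead of a full `kubeadm init`, minikube replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing data directory. The sequence of Run lines above, collapsed into a loop; the sudo/PATH juggling from the log is elided and this is a sketch, not minikube's code:

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
            }
        }
    }
	)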
	I1025 19:42:32.149744   84989 api_server.go:52] waiting for apiserver process to appear ...
	I1025 19:42:32.149805   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 19:42:32.161493   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 19:42:32.674962   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 19:42:32.710746   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:32.710761   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:32.710765   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:32.710770   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:32.710787   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:32.710799   84617 retry.go:31] will retry after 9.845646228s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1025 19:42:33.175819   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 19:42:33.674277   84989 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 19:42:33.691369   84989 api_server.go:72] duration metric: took 1.54158255s to wait for apiserver process to appear ...
	I1025 19:42:33.691382   84989 api_server.go:88] waiting for apiserver healthz status ...
	I1025 19:42:33.691395   84989 api_server.go:253] Checking apiserver healthz at https://192.168.87.28:8443/healthz ...
	I1025 19:42:36.266188   84989 api_server.go:279] https://192.168.87.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 19:42:36.266208   84989 api_server.go:103] status: https://192.168.87.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 19:42:36.266216   84989 api_server.go:253] Checking apiserver healthz at https://192.168.87.28:8443/healthz ...
	I1025 19:42:36.317555   84989 api_server.go:279] https://192.168.87.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 19:42:36.317580   84989 api_server.go:103] status: https://192.168.87.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
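	(Editor's note: the healthz dumps above are normal during startup: first a 403 for the anonymous user while the RBAC bootstrap roles are still missing, then 500s whose [-] lines name the poststarthooks that have not finished, and the [-] set shrinks on each poll until everything reports [+]. A bare-bones version of the probe; TLS verification is skipped here only because this sketch has no CA material, whereas minikube's real client is properly configured:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Sketch only: skip verification instead of wiring up the CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://192.168.87.28:8443/healthz")
            if err != nil {
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("healthz ok")
                return
            }
            fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            time.Sleep(500 * time.Millisecond)
        }
    }
	)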
	I1025 19:42:36.818819   84989 api_server.go:253] Checking apiserver healthz at https://192.168.87.28:8443/healthz ...
	I1025 19:42:36.823517   84989 api_server.go:279] https://192.168.87.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 19:42:36.823531   84989 api_server.go:103] status: https://192.168.87.28:8443/healthz returned error 500:
	[healthz response body identical to the 500 response logged immediately above; duplicate omitted]
	I1025 19:42:37.318313   84989 api_server.go:253] Checking apiserver healthz at https://192.168.87.28:8443/healthz ...
	I1025 19:42:37.324794   84989 api_server.go:279] https://192.168.87.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1025 19:42:37.324810   84989 api_server.go:103] status: https://192.168.87.28:8443/healthz returned error 500:
	[healthz response body identical to the 500 response logged immediately above; duplicate omitted]
	I1025 19:42:37.818620   84989 api_server.go:253] Checking apiserver healthz at https://192.168.87.28:8443/healthz ...
	I1025 19:42:37.822008   84989 api_server.go:279] https://192.168.87.28:8443/healthz returned 200:
	ok
	I1025 19:42:37.827638   84989 api_server.go:141] control plane version: v1.28.3
	I1025 19:42:37.827653   84989 api_server.go:131] duration metric: took 4.136146857s to wait for apiserver health ...
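
The [-] entries in the responses above are apiserver post-start hooks that had not yet finished after the restart; /healthz keeps returning 500 until every hook reports ok, which is why minikube polls it in a loop. For reference, the same per-check breakdown can be requested from a running cluster with standard kubectl (assuming a kubeconfig pointed at this profile):

	kubectl get --raw='/healthz?verbose'
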
	I1025 19:42:37.827659   84989 cni.go:84] Creating CNI manager for ""
	I1025 19:42:37.827668   84989 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 19:42:37.850869   84989 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 19:42:37.871766   84989 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 19:42:37.879639   84989 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
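
The 457-byte conflist written above is minikube's bridge CNI configuration. If the file needs to be inspected after a run, it can be read back out of the VM; the profile name here is taken from the surrounding log:

	minikube ssh -p embed-certs-195000 "sudo cat /etc/cni/net.d/1-k8s.conflist"
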
	I1025 19:42:37.907129   84989 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 19:42:37.913120   84989 system_pods.go:59] 8 kube-system pods found
	I1025 19:42:37.913140   84989 system_pods.go:61] "coredns-5dd5756b68-bzgq8" [5854bc87-034c-42ae-b9b8-6e51a2cf509e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 19:42:37.913148   84989 system_pods.go:61] "etcd-embed-certs-195000" [3345e59e-6ad4-47a5-803a-afd6df44f74e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 19:42:37.913153   84989 system_pods.go:61] "kube-apiserver-embed-certs-195000" [c5861062-2a6f-4d49-9bef-9ea9ba0bbeb6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 19:42:37.913158   84989 system_pods.go:61] "kube-controller-manager-embed-certs-195000" [e1c43657-5fb1-42f7-b0d7-c232c546d7c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 19:42:37.913165   84989 system_pods.go:61] "kube-proxy-v55jp" [d2d74fd3-a239-4a7e-afb9-04e72a8650a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1025 19:42:37.913170   84989 system_pods.go:61] "kube-scheduler-embed-certs-195000" [c31fb26b-88ea-4de5-a344-ee1851c047b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 19:42:37.913175   84989 system_pods.go:61] "metrics-server-57f55c9bc5-2k2fv" [6c3a6b2a-63e5-48b4-a473-4933c18084d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:37.913188   84989 system_pods.go:61] "storage-provisioner" [2e5bf16b-7eb2-4ab8-940d-4f2328f0379a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 19:42:37.913194   84989 system_pods.go:74] duration metric: took 6.055046ms to wait for pod list to return data ...
	I1025 19:42:37.913200   84989 node_conditions.go:102] verifying NodePressure condition ...
	I1025 19:42:37.915439   84989 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1025 19:42:37.915462   84989 node_conditions.go:123] node cpu capacity is 2
	I1025 19:42:37.915473   84989 node_conditions.go:105] duration metric: took 2.268366ms to run NodePressure ...
	I1025 19:42:37.915490   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 19:42:38.169838   84989 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1025 19:42:38.176361   84989 kubeadm.go:787] kubelet initialised
	I1025 19:42:38.176375   84989 kubeadm.go:788] duration metric: took 6.523762ms waiting for restarted kubelet to initialise ...
	I1025 19:42:38.176382   84989 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 19:42:38.180900   84989 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:38.185200   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.185217   84989 pod_ready.go:81] duration metric: took 4.304111ms waiting for pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:38.185225   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.185233   84989 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:38.192601   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "etcd-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.192614   84989 pod_ready.go:81] duration metric: took 7.374731ms waiting for pod "etcd-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:38.192622   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "etcd-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.192634   84989 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:38.199336   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.199349   84989 pod_ready.go:81] duration metric: took 6.709459ms waiting for pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:38.199357   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.199362   84989 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:38.309698   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.309712   84989 pod_ready.go:81] duration metric: took 110.340921ms waiting for pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:38.309720   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.309730   84989 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v55jp" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:38.711460   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "kube-proxy-v55jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.711478   84989 pod_ready.go:81] duration metric: took 401.719422ms waiting for pod "kube-proxy-v55jp" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:38.711486   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "kube-proxy-v55jp" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:38.711497   84989 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:39.110694   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:39.110707   84989 pod_ready.go:81] duration metric: took 399.191169ms waiting for pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:39.110714   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:39.110719   84989 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:39.511741   84989 pod_ready.go:97] node "embed-certs-195000" hosting pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:39.511756   84989 pod_ready.go:81] duration metric: took 401.019448ms waiting for pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace to be "Ready" ...
	E1025 19:42:39.511764   84989 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-195000" hosting pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-195000" has status "Ready":"False"
	I1025 19:42:39.511772   84989 pod_ready.go:38] duration metric: took 1.335341346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
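
Each wait above is skipped because the node hosting the pods has not reached "Ready" yet, not because the pods themselves failed. An equivalent manual check, using the same namespace and one of the same labels the test waits on, would be:

	kubectl -n kube-system get pods -o wide
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s
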
	I1025 19:42:39.511784   84989 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 19:42:39.519657   84989 ops.go:34] apiserver oom_adj: -16
	I1025 19:42:39.519675   84989 kubeadm.go:640] restartCluster took 18.545269961s
	I1025 19:42:39.519681   84989 kubeadm.go:406] StartCluster complete in 18.564629476s
	I1025 19:42:39.519690   84989 settings.go:142] acquiring lock: {Name:mk1184b3673d34af589a18dc0e5575b17473d007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 19:42:39.519771   84989 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 19:42:39.520628   84989 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17491-76819/kubeconfig: {Name:mkd34fe72df098023f5c63e89fff2d5fe1ec696f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 19:42:39.520923   84989 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 19:42:39.520942   84989 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
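
The toEnable map above is the effective addon configuration for the profile; the same state can be read back afterwards with the addons subcommand (profile name taken from this log):

	minikube addons list -p embed-certs-195000
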
	I1025 19:42:39.520986   84989 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-195000"
	I1025 19:42:39.520995   84989 addons.go:69] Setting default-storageclass=true in profile "embed-certs-195000"
	I1025 19:42:39.521000   84989 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-195000"
	W1025 19:42:39.521005   84989 addons.go:240] addon storage-provisioner should already be in state true
	I1025 19:42:39.521009   84989 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-195000"
	I1025 19:42:39.521056   84989 host.go:66] Checking if "embed-certs-195000" exists ...
	I1025 19:42:39.521045   84989 addons.go:69] Setting metrics-server=true in profile "embed-certs-195000"
	I1025 19:42:39.521091   84989 config.go:182] Loaded profile config "embed-certs-195000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 19:42:39.521094   84989 addons.go:231] Setting addon metrics-server=true in "embed-certs-195000"
	W1025 19:42:39.521109   84989 addons.go:240] addon metrics-server should already be in state true
	I1025 19:42:39.521112   84989 addons.go:69] Setting dashboard=true in profile "embed-certs-195000"
	I1025 19:42:39.521154   84989 addons.go:231] Setting addon dashboard=true in "embed-certs-195000"
	W1025 19:42:39.521165   84989 addons.go:240] addon dashboard should already be in state true
	I1025 19:42:39.521182   84989 host.go:66] Checking if "embed-certs-195000" exists ...
	I1025 19:42:39.521220   84989 host.go:66] Checking if "embed-certs-195000" exists ...
	I1025 19:42:39.521325   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.521348   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.521365   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.521387   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.521470   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.521559   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.521995   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.522186   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.533506   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57873
	I1025 19:42:39.533515   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57874
	I1025 19:42:39.534020   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.534069   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.534433   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.534459   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.534597   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.534619   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.534866   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.534873   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57877
	I1025 19:42:39.534895   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.535058   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetState
	I1025 19:42:39.535178   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:39.535284   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.535317   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 85000
	I1025 19:42:39.535354   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.535382   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.536235   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.536303   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.536448   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57879
	I1025 19:42:39.536702   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.537951   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.538128   84989 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-195000" context rescaled to 1 replicas
	I1025 19:42:39.538159   84989 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.87.28 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 19:42:39.562090   84989 out.go:177] * Verifying Kubernetes components...
	I1025 19:42:39.538375   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.538478   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.538551   84989 addons.go:231] Setting addon default-storageclass=true in "embed-certs-195000"
	I1025 19:42:39.603950   84989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1025 19:42:39.603961   84989 addons.go:240] addon default-storageclass should already be in state true
	I1025 19:42:39.603950   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.562157   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.603979   84989 host.go:66] Checking if "embed-certs-195000" exists ...
	I1025 19:42:39.544196   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57881
	I1025 19:42:39.604894   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.605038   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.605274   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.605357   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.605589   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.605627   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.605665   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.607109   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.609164   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.609634   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetState
	I1025 19:42:39.609813   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:39.609861   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 85000
	I1025 19:42:39.612271   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:39.633908   84989 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 19:42:39.614100   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57883
	I1025 19:42:39.615933   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57884
	I1025 19:42:39.616880   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57885
	I1025 19:42:39.655033   84989 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 19:42:39.655068   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 19:42:39.655085   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:39.655235   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:39.655362   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.655366   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.655398   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.655412   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:39.655530   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:39.655650   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:39.655728   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.655744   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.655751   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.655760   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.655777   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.655791   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.655963   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.655984   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.656034   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.656109   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetState
	I1025 19:42:39.656161   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetState
	I1025 19:42:39.656214   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:39.656259   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:39.656301   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 85000
	I1025 19:42:39.656333   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 85000
	I1025 19:42:39.656368   84989 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:42:39.656400   84989 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:42:39.658143   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:39.658395   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:39.678898   84989 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1025 19:42:39.664728   84989 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:57890
	I1025 19:42:39.675356   84989 node_ready.go:35] waiting up to 6m0s for node "embed-certs-195000" to be "Ready" ...
	I1025 19:42:39.675395   84989 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1025 19:42:39.699844   84989 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1025 19:42:39.720965   84989 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 19:42:39.720975   84989 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 19:42:39.741995   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 19:42:39.742005   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 19:42:39.741996   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 19:42:39.742021   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:39.742032   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:39.700213   84989 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:42:39.714744   84989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 19:42:39.742191   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:39.742193   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:39.742290   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:39.742318   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:39.742397   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:39.742427   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:39.742487   84989 main.go:141] libmachine: Using API Version  1
	I1025 19:42:39.742507   84989 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:42:39.742527   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:39.742541   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:39.742733   84989 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:42:39.742839   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetState
	I1025 19:42:39.742935   84989 main.go:141] libmachine: (embed-certs-195000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:42:39.743024   84989 main.go:141] libmachine: (embed-certs-195000) DBG | hyperkit pid from json: 85000
	I1025 19:42:39.744175   84989 main.go:141] libmachine: (embed-certs-195000) Calling .DriverName
	I1025 19:42:39.744314   84989 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 19:42:39.744321   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 19:42:39.744329   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHHostname
	I1025 19:42:39.744403   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHPort
	I1025 19:42:39.744500   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHKeyPath
	I1025 19:42:39.744593   84989 main.go:141] libmachine: (embed-certs-195000) Calling .GetSSHUsername
	I1025 19:42:39.744683   84989 sshutil.go:53] new ssh client: &{IP:192.168.87.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/embed-certs-195000/id_rsa Username:docker}
	I1025 19:42:39.798048   84989 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 19:42:39.798061   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1025 19:42:39.818239   84989 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 19:42:39.818251   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 19:42:39.827734   84989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 19:42:39.842444   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 19:42:39.842457   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 19:42:39.869942   84989 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 19:42:39.869954   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 19:42:39.908071   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 19:42:39.908087   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 19:42:39.927106   84989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 19:42:39.975977   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 19:42:39.975990   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 19:42:40.016881   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 19:42:40.016893   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 19:42:40.132848   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 19:42:40.132863   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 19:42:40.172691   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 19:42:40.172715   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 19:42:40.184327   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 19:42:40.184340   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 19:42:40.196560   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 19:42:40.196572   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 19:42:40.248655   84989 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 19:42:40.248668   84989 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 19:42:40.279080   84989 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
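
After the kubectl apply above succeeds, the dashboard objects land in the kubernetes-dashboard namespace and can be verified directly, e.g.:

	kubectl -n kubernetes-dashboard get deploy,svc,sa
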
	I1025 19:42:40.912400   84989 node_ready.go:49] node "embed-certs-195000" has status "Ready":"True"
	I1025 19:42:40.912417   84989 node_ready.go:38] duration metric: took 1.212510626s waiting for node "embed-certs-195000" to be "Ready" ...
	I1025 19:42:40.912423   84989 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1025 19:42:40.916246   84989 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:41.035188   84989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.207396185s)
	I1025 19:42:41.035229   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.035248   84989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.108092783s)
	I1025 19:42:41.035252   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.035254   84989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.29312745s)
	I1025 19:42:41.035265   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.035273   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.035273   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.035307   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.035431   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.035430   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.035443   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.035446   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.035449   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.035453   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.035455   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.035462   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.035474   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.035499   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.035569   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.035580   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.035588   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.035596   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.035608   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.035608   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.035652   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.035657   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.035697   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.035704   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.035727   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.035737   84989 addons.go:467] Verifying addon metrics-server=true in "embed-certs-195000"
	I1025 19:42:41.035802   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.035812   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.035803   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.039827   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.039839   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.039983   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.039994   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.040004   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.383727   84989 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.104585175s)
	I1025 19:42:41.383754   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.383761   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.383927   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.383932   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.383935   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.383943   84989 main.go:141] libmachine: Making call to close driver server
	I1025 19:42:41.383948   84989 main.go:141] libmachine: (embed-certs-195000) Calling .Close
	I1025 19:42:41.384124   84989 main.go:141] libmachine: Successfully made call to close driver server
	I1025 19:42:41.384133   84989 main.go:141] libmachine: Making call to close connection to plugin binary
	I1025 19:42:41.384161   84989 main.go:141] libmachine: (embed-certs-195000) DBG | Closing plugin on server side
	I1025 19:42:41.424368   84989 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-195000 addons enable metrics-server	
	
	
	I1025 19:42:41.482064   84989 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1025 19:42:41.556169   84989 addons.go:502] enable addons completed in 2.035175782s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1025 19:42:42.562900   84617 system_pods.go:86] 4 kube-system pods found
	I1025 19:42:42.562916   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:42.562922   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:42.562929   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:42.562936   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:42.562948   84617 retry.go:31] will retry after 9.680400104s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
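
The pid-84617 lines interleaved here come from a second profile running concurrently (old-k8s-version-159000). Its retry loop is waiting on control-plane static pods, which the kubelet recreates from /etc/kubernetes/manifests; assuming the tier label kubeadm applies to those pods, their status can be checked with:

	kubectl -n kube-system get pods -l tier=control-plane
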
	I1025 19:42:43.313264   84989 pod_ready.go:102] pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace has status "Ready":"False"
	I1025 19:42:45.314510   84989 pod_ready.go:102] pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace has status "Ready":"False"
	I1025 19:42:46.314644   84989 pod_ready.go:92] pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace has status "Ready":"True"
	I1025 19:42:46.314656   84989 pod_ready.go:81] duration metric: took 5.398241829s waiting for pod "coredns-5dd5756b68-bzgq8" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:46.314663   84989 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:48.324712   84989 pod_ready.go:102] pod "etcd-embed-certs-195000" in "kube-system" namespace has status "Ready":"False"
	I1025 19:42:49.325340   84989 pod_ready.go:92] pod "etcd-embed-certs-195000" in "kube-system" namespace has status "Ready":"True"
	I1025 19:42:49.325352   84989 pod_ready.go:81] duration metric: took 3.010597425s waiting for pod "etcd-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:49.325358   84989 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:49.329149   84989 pod_ready.go:92] pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace has status "Ready":"True"
	I1025 19:42:49.329159   84989 pod_ready.go:81] duration metric: took 3.796019ms waiting for pod "kube-apiserver-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:49.329165   84989 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:51.718306   84989 pod_ready.go:102] pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace has status "Ready":"False"
	I1025 19:42:52.217618   84989 pod_ready.go:92] pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace has status "Ready":"True"
	I1025 19:42:52.217631   84989 pod_ready.go:81] duration metric: took 2.888377079s waiting for pod "kube-controller-manager-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:52.217641   84989 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v55jp" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:52.221515   84989 pod_ready.go:92] pod "kube-proxy-v55jp" in "kube-system" namespace has status "Ready":"True"
	I1025 19:42:52.221528   84989 pod_ready.go:81] duration metric: took 3.882084ms waiting for pod "kube-proxy-v55jp" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:52.221535   84989 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:52.510575   84989 pod_ready.go:92] pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace has status "Ready":"True"
	I1025 19:42:52.510587   84989 pod_ready.go:81] duration metric: took 289.038915ms waiting for pod "kube-scheduler-embed-certs-195000" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:52.510595   84989 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace to be "Ready" ...
	I1025 19:42:52.247046   84617 system_pods.go:86] 6 kube-system pods found
	I1025 19:42:52.247059   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:42:52.247066   84617 system_pods.go:89] "kube-apiserver-old-k8s-version-159000" [ee5844e4-e931-48a1-9995-fb826625937e] Running
	I1025 19:42:52.247069   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:42:52.247073   84617 system_pods.go:89] "kube-scheduler-old-k8s-version-159000" [a4737e8f-82c3-4155-a23f-5779f5193b7b] Running
	I1025 19:42:52.247078   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:42:52.247084   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:42:52.247094   84617 retry.go:31] will retry after 15.957645242s: missing components: etcd, kube-controller-manager
	I1025 19:42:54.816102   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:42:56.841427   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:42:59.336713   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:01.816179   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:03.817616   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:05.818029   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:08.210054   84617 system_pods.go:86] 8 kube-system pods found
	I1025 19:43:08.210067   84617 system_pods.go:89] "coredns-5644d7b6d9-bwx2v" [4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8] Running
	I1025 19:43:08.210072   84617 system_pods.go:89] "etcd-old-k8s-version-159000" [2c003bf3-31ba-4a54-9d64-67d74a3cb934] Running
	I1025 19:43:08.210075   84617 system_pods.go:89] "kube-apiserver-old-k8s-version-159000" [ee5844e4-e931-48a1-9995-fb826625937e] Running
	I1025 19:43:08.210079   84617 system_pods.go:89] "kube-controller-manager-old-k8s-version-159000" [e0fc50a7-3d66-42df-8be0-d55f200aa271] Running
	I1025 19:43:08.210082   84617 system_pods.go:89] "kube-proxy-flhf6" [ba552d17-1dd8-484c-8ac9-f95b6e1dca83] Running
	I1025 19:43:08.210085   84617 system_pods.go:89] "kube-scheduler-old-k8s-version-159000" [a4737e8f-82c3-4155-a23f-5779f5193b7b] Running
	I1025 19:43:08.210098   84617 system_pods.go:89] "metrics-server-74d5856cc6-4ljwz" [3113f557-b6a5-4908-ba42-8d109d0c1ae0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 19:43:08.210103   84617 system_pods.go:89] "storage-provisioner" [0703c5fb-af24-47d4-b84e-df39146cb0c2] Running
	I1025 19:43:08.210109   84617 system_pods.go:126] duration metric: took 1m8.692223366s to wait for k8s-apps to be running ...
	I1025 19:43:08.210114   84617 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 19:43:08.210161   84617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 19:43:08.219164   84617 system_svc.go:56] duration metric: took 9.043655ms WaitForService to wait for kubelet.
	I1025 19:43:08.219177   84617 kubeadm.go:581] duration metric: took 1m10.546780605s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 19:43:08.219190   84617 node_conditions.go:102] verifying NodePressure condition ...
	I1025 19:43:08.221310   84617 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1025 19:43:08.221324   84617 node_conditions.go:123] node cpu capacity is 2
	I1025 19:43:08.221331   84617 node_conditions.go:105] duration metric: took 2.137545ms to run NodePressure ...
	I1025 19:43:08.221352   84617 start.go:228] waiting for startup goroutines ...
	I1025 19:43:08.221359   84617 start.go:233] waiting for cluster config update ...
	I1025 19:43:08.221371   84617 start.go:242] writing updated cluster config ...
	I1025 19:43:08.221711   84617 ssh_runner.go:195] Run: rm -f paused
	I1025 19:43:08.262162   84617 start.go:600] kubectl: 1.27.2, cluster: 1.16.0 (minor skew: 11)
	I1025 19:43:08.282950   84617 out.go:177] 
	W1025 19:43:08.319928   84617 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1025 19:43:08.356653   84617 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1025 19:43:08.435665   84617 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-159000" cluster and "default" namespace by default
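
The version-skew warning above (kubectl 1.27.2 against a 1.16.0 cluster) can be sidestepped by using the kubectl that minikube bundles for the cluster's version, as the hint suggests:

	minikube kubectl -p old-k8s-version-159000 -- get pods -A
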
	I1025 19:43:08.321547   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:10.817458   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:12.817811   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:14.817900   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	I1025 19:43:17.316746   84989 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2k2fv" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-10-26 02:35:29 UTC, ends at Thu 2023-10-26 02:43:19 UTC. --
	Oct 26 02:42:12 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:12.640135208Z" level=info msg="ignoring event" container=81465552c08978d5b4620546445ceeacba6decb8896112eb9bbe79e86591c74a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 02:42:12 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:12.640875690Z" level=warning msg="cleaning up after shim disconnected" id=81465552c08978d5b4620546445ceeacba6decb8896112eb9bbe79e86591c74a namespace=moby
	Oct 26 02:42:12 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:12.640923112Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 26 02:42:19 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:19.122657350Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:19 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:19.123006897Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:19 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:19.124427303Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.161040119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.161127590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.161160900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.161182874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:30.438105104Z" level=info msg="ignoring event" container=61e404e77b6e4e4aa1fc4a45f1c757687109e89c8b22a18d03020c526cf0375d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.438490695Z" level=info msg="shim disconnected" id=61e404e77b6e4e4aa1fc4a45f1c757687109e89c8b22a18d03020c526cf0375d namespace=moby
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.438539919Z" level=warning msg="cleaning up after shim disconnected" id=61e404e77b6e4e4aa1fc4a45f1c757687109e89c8b22a18d03020c526cf0375d namespace=moby
	Oct 26 02:42:30 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:42:30.438548920Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 26 02:42:43 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:43.124534150Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:43 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:43.124859518Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:43 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:42:43.125900156Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.175883404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.175984239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.175999194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.176044583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1169]: time="2023-10-26T02:43:02.466211409Z" level=info msg="ignoring event" container=77f6da3e210230c3a1497ec73716a257289b967ab7ee978d1cf391ac1dea5a67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.466977438Z" level=info msg="shim disconnected" id=77f6da3e210230c3a1497ec73716a257289b967ab7ee978d1cf391ac1dea5a67 namespace=moby
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.467359753Z" level=warning msg="cleaning up after shim disconnected" id=77f6da3e210230c3a1497ec73716a257289b967ab7ee978d1cf391ac1dea5a67 namespace=moby
	Oct 26 02:43:02 old-k8s-version-159000 dockerd[1175]: time="2023-10-26T02:43:02.467404881Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	77f6da3e2102   a90209bb39e3             "nginx -g 'daemon of…"   17 seconds ago       Exited (1) 17 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard_dd44af97-1919-4e1f-bbf5-ad3737fa07c8_3
	9d87ba3bfbe8   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-bxbts_kubernetes-dashboard_ec39e599-10b1-499e-96c4-6fc6a8db22f8_0
	de53ea9af34f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard_dd44af97-1919-4e1f-bbf5-ad3737fa07c8_0
	3171846920f0   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-4ljwz_kube-system_3113f557-b6a5-4908-ba42-8d109d0c1ae0_0
	5c860d02c378   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-bxbts_kubernetes-dashboard_ec39e599-10b1-499e-96c4-6fc6a8db22f8_0
	76c0ffd7c5f4   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_0703c5fb-af24-47d4-b84e-df39146cb0c2_0
	b8518dedc090   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_0703c5fb-af24-47d4-b84e-df39146cb0c2_0
	218cb8e92d8c   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-bwx2v_kube-system_4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8_0
	c60ab9ee8497   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-bwx2v_kube-system_4ebd0c8f-a11f-4b9a-9dd6-b7cf10bd97e8_0
	09481cb07602   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-flhf6_kube-system_ba552d17-1dd8-484c-8ac9-f95b6e1dca83_0
	5914155ee856   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-flhf6_kube-system_ba552d17-1dd8-484c-8ac9-f95b6e1dca83_0
	749a2ebf9660   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-159000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	10cd3188b8bb   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-159000_kube-system_54c5e35b90e99a312c409fa6f5104a39_0
	1d33270d0ae6   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-159000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	a31d5459615a   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-159000_kube-system_23f95de21a46dd48dc47c1ead5980a14_0
	cc2bcddd2fb8   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-159000_kube-system_54c5e35b90e99a312c409fa6f5104a39_0
	065c81056573   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-159000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	6df81cd2598b   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-159000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	4894399696cd   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-159000_kube-system_23f95de21a46dd48dc47c1ead5980a14_0
	time="2023-10-26T02:43:19Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [218cb8e92d8c] <==
	* .:53
	2023-10-26T02:41:58.942Z [INFO] plugin/reload: Running configuration MD5 = 959c9ea6bb427084e0d0801a2a783244
	2023-10-26T02:41:58.943Z [INFO] CoreDNS-1.6.2
	2023-10-26T02:41:58.943Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-10-26T02:41:58.946Z [INFO] 127.0.0.1:35638 - 27234 "HINFO IN 1876671306407668287.2411433651479286705. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.003549214s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-159000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-159000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af1d352f1030f8f3ea7f97e311e7fe82ef319942
	                    minikube.k8s.io/name=old-k8s-version-159000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T19_41_42_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 26 Oct 2023 02:41:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 26 Oct 2023 02:42:37 +0000   Thu, 26 Oct 2023 02:41:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 26 Oct 2023 02:42:37 +0000   Thu, 26 Oct 2023 02:41:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 26 Oct 2023 02:42:37 +0000   Thu, 26 Oct 2023 02:41:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 26 Oct 2023 02:42:37 +0000   Thu, 26 Oct 2023 02:41:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.87.26
	  Hostname:    old-k8s-version-159000
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	System Info:
	 Machine ID:                 ad36a786b662440d9baee457383215af
	 System UUID:                f75211ee-0000-0000-9635-149d997fca88
	 Boot ID:                    1d8ef99f-3f89-422b-a2ab-628c9ec44efd
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-bwx2v                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     82s
	  kube-system                etcd-old-k8s-version-159000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                kube-apiserver-old-k8s-version-159000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                kube-controller-manager-old-k8s-version-159000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                kube-proxy-flhf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                kube-scheduler-old-k8s-version-159000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                metrics-server-74d5856cc6-4ljwz                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         80s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-zsfpj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-bxbts             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet, old-k8s-version-159000     Node old-k8s-version-159000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet, old-k8s-version-159000     Node old-k8s-version-159000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet, old-k8s-version-159000     Node old-k8s-version-159000 status is now: NodeHasSufficientPID
	  Normal  Starting                 82s                  kube-proxy, old-k8s-version-159000  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.028088] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +5.014608] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000000] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007232] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.373717] systemd-fstab-generator[125]: Ignoring "noauto" for root device
	[  +0.042128] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.877596] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.965142] systemd-fstab-generator[512]: Ignoring "noauto" for root device
	[  +0.084180] systemd-fstab-generator[523]: Ignoring "noauto" for root device
	[  +0.738481] systemd-fstab-generator[776]: Ignoring "noauto" for root device
	[  +0.213435] systemd-fstab-generator[814]: Ignoring "noauto" for root device
	[  +0.088012] systemd-fstab-generator[825]: Ignoring "noauto" for root device
	[  +0.095436] systemd-fstab-generator[838]: Ignoring "noauto" for root device
	[  +5.996196] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
	[  +1.857930] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.422218] systemd-fstab-generator[1611]: Ignoring "noauto" for root device
	[Oct26 02:36] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.090348] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +23.560971] kauditd_printk_skb: 5 callbacks suppressed
	[Oct26 02:41] systemd-fstab-generator[6856]: Ignoring "noauto" for root device
	[Oct26 02:42] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.726797] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	
	* 
	* ==> etcd [10cd3188b8bb] <==
	* 2023-10-26 02:41:33.793100 I | raft: c1817836cff3f8fe became follower at term 0
	2023-10-26 02:41:33.793109 I | raft: newRaft c1817836cff3f8fe [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-26 02:41:33.793111 I | raft: c1817836cff3f8fe became follower at term 1
	2023-10-26 02:41:33.937800 W | auth: simple token is not cryptographically signed
	2023-10-26 02:41:34.014389 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-26 02:41:34.109584 I | etcdserver: c1817836cff3f8fe as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-26 02:41:34.117330 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-26 02:41:34.172845 I | embed: listening for metrics on http://192.168.87.26:2381
	2023-10-26 02:41:34.177538 I | etcdserver/membership: added member c1817836cff3f8fe [https://192.168.87.26:2380] to cluster 4e64e4d30197bf76
	2023-10-26 02:41:34.177583 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-26 02:41:34.893635 I | raft: c1817836cff3f8fe is starting a new election at term 1
	2023-10-26 02:41:34.894047 I | raft: c1817836cff3f8fe became candidate at term 2
	2023-10-26 02:41:34.894375 I | raft: c1817836cff3f8fe received MsgVoteResp from c1817836cff3f8fe at term 2
	2023-10-26 02:41:34.894484 I | raft: c1817836cff3f8fe became leader at term 2
	2023-10-26 02:41:34.894745 I | raft: raft.node: c1817836cff3f8fe elected leader c1817836cff3f8fe at term 2
	2023-10-26 02:41:34.895092 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-26 02:41:34.895976 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-26 02:41:34.896042 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-26 02:41:34.896062 I | etcdserver: published {Name:old-k8s-version-159000 ClientURLs:[https://192.168.87.26:2379]} to cluster 4e64e4d30197bf76
	2023-10-26 02:41:34.896376 I | embed: ready to serve client requests
	2023-10-26 02:41:34.896538 I | embed: ready to serve client requests
	2023-10-26 02:41:34.897498 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-26 02:41:34.897654 I | embed: serving client requests on 192.168.87.26:2379
	2023-10-26 02:41:59.560054 W | etcdserver: request "header:<ID:17941931252265936157 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-84b68f675b\" mod_revision:400 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-84b68f675b\" value_size:1249 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-84b68f675b\" > >>" with result "size:16" took too long (134.60509ms) to execute
	2023-10-26 02:41:59.560760 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:3 size:5842" took too long (130.086123ms) to execute
	
	* 
	* ==> kernel <==
	*  02:43:20 up 7 min,  0 users,  load average: 0.42, 0.35, 0.15
	Linux old-k8s-version-159000 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a31d5459615a] <==
	* I1026 02:41:38.144911       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1026 02:41:38.151063       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1026 02:41:38.154228       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1026 02:41:38.154281       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1026 02:41:38.906682       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 02:41:39.928716       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 02:41:40.207824       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1026 02:41:40.511143       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.87.26]
	I1026 02:41:40.511747       1 controller.go:606] quota admission added evaluator for: endpoints
	I1026 02:41:41.430344       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1026 02:41:42.036246       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1026 02:41:42.300461       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1026 02:41:57.731073       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1026 02:41:57.882018       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1026 02:41:57.953197       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1026 02:42:00.493407       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1026 02:42:00.493448       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:42:00.493528       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 02:42:00.493536       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 02:43:00.494529       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1026 02:43:00.494596       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 02:43:00.494746       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1026 02:43:00.494883       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [749a2ebf9660] <==
	* I1026 02:41:59.161278       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"b0f9056c-0d12-4b57-869d-77d05ea4163c", APIVersion:"apps/v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	I1026 02:41:59.161163       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"599278fe-3251-4d32-8531-68eb41c35053", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.187765       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.188211       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"599278fe-3251-4d32-8531-68eb41c35053", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.188472       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"88086eb1-19a4-47f1-83c5-d1b4bd7304e1", APIVersion:"apps/v1", ResourceVersion:"396", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.193512       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.193664       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"599278fe-3251-4d32-8531-68eb41c35053", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.196663       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.203996       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.204165       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"88086eb1-19a4-47f1-83c5-d1b4bd7304e1", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.214808       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.214964       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"88086eb1-19a4-47f1-83c5-d1b4bd7304e1", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.217784       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.217796       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"599278fe-3251-4d32-8531-68eb41c35053", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.238050       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.238269       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"88086eb1-19a4-47f1-83c5-d1b4bd7304e1", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1026 02:41:59.288762       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.288891       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"88086eb1-19a4-47f1-83c5-d1b4bd7304e1", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1026 02:41:59.371366       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"88086eb1-19a4-47f1-83c5-d1b4bd7304e1", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-bxbts
	I1026 02:42:00.050093       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"7e47ffcd-5038-4603-8210-b770feae363b", APIVersion:"apps/v1", ResourceVersion:"366", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-4ljwz
	I1026 02:42:00.302406       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"599278fe-3251-4d32-8531-68eb41c35053", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-zsfpj
	E1026 02:42:28.273098       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1026 02:42:30.023401       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1026 02:42:58.525745       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1026 02:43:02.025766       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [09481cb07602] <==
	* W1026 02:41:58.634671       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1026 02:41:58.643423       1 node.go:135] Successfully retrieved node IP: 192.168.87.26
	I1026 02:41:58.643443       1 server_others.go:149] Using iptables Proxier.
	I1026 02:41:58.644099       1 server.go:529] Version: v1.16.0
	I1026 02:41:58.655648       1 config.go:131] Starting endpoints config controller
	I1026 02:41:58.655670       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1026 02:41:58.655851       1 config.go:313] Starting service config controller
	I1026 02:41:58.655860       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1026 02:41:58.758785       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1026 02:41:58.758824       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1d33270d0ae6] <==
	* W1026 02:41:37.224871       1 authentication.go:79] Authentication is disabled
	I1026 02:41:37.224879       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1026 02:41:37.225329       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1026 02:41:37.263857       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 02:41:37.265512       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 02:41:37.265671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 02:41:37.266356       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 02:41:37.266381       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 02:41:37.267590       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 02:41:37.267622       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 02:41:37.267646       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 02:41:37.267666       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 02:41:37.267683       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 02:41:37.270556       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 02:41:38.266223       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 02:41:38.269290       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 02:41:38.271466       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1026 02:41:38.273391       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 02:41:38.274739       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 02:41:38.276312       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 02:41:38.277397       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 02:41:38.278937       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1026 02:41:38.280062       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 02:41:38.281355       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 02:41:38.284121       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-10-26 02:35:29 UTC, ends at Thu 2023-10-26 02:43:20 UTC. --
	Oct 26 02:42:14 old-k8s-version-159000 kubelet[6862]: E1026 02:42:14.302151    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:42:15 old-k8s-version-159000 kubelet[6862]: E1026 02:42:15.399756    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:42:19 old-k8s-version-159000 kubelet[6862]: E1026 02:42:19.124895    6862 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host
	Oct 26 02:42:19 old-k8s-version-159000 kubelet[6862]: E1026 02:42:19.125164    6862 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host
	Oct 26 02:42:19 old-k8s-version-159000 kubelet[6862]: E1026 02:42:19.125251    6862 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host
	Oct 26 02:42:19 old-k8s-version-159000 kubelet[6862]: E1026 02:42:19.125307    6862 pod_workers.go:191] Error syncing pod 3113f557-b6a5-4908-ba42-8d109d0c1ae0 ("metrics-server-74d5856cc6-4ljwz_kube-system(3113f557-b6a5-4908-ba42-8d109d0c1ae0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:30 old-k8s-version-159000 kubelet[6862]: W1026 02:42:30.398346    6862 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-zsfpj through plugin: invalid network status for
	Oct 26 02:42:31 old-k8s-version-159000 kubelet[6862]: W1026 02:42:31.425773    6862 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-zsfpj through plugin: invalid network status for
	Oct 26 02:42:31 old-k8s-version-159000 kubelet[6862]: E1026 02:42:31.429968    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:42:32 old-k8s-version-159000 kubelet[6862]: E1026 02:42:32.132033    6862 pod_workers.go:191] Error syncing pod 3113f557-b6a5-4908-ba42-8d109d0c1ae0 ("metrics-server-74d5856cc6-4ljwz_kube-system(3113f557-b6a5-4908-ba42-8d109d0c1ae0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 02:42:32 old-k8s-version-159000 kubelet[6862]: W1026 02:42:32.436132    6862 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-zsfpj through plugin: invalid network status for
	Oct 26 02:42:35 old-k8s-version-159000 kubelet[6862]: E1026 02:42:35.400626    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:42:43 old-k8s-version-159000 kubelet[6862]: E1026 02:42:43.126374    6862 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host
	Oct 26 02:42:43 old-k8s-version-159000 kubelet[6862]: E1026 02:42:43.126416    6862 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host
	Oct 26 02:42:43 old-k8s-version-159000 kubelet[6862]: E1026 02:42:43.126447    6862 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host
	Oct 26 02:42:43 old-k8s-version-159000 kubelet[6862]: E1026 02:42:43.126466    6862 pod_workers.go:191] Error syncing pod 3113f557-b6a5-4908-ba42-8d109d0c1ae0 ("metrics-server-74d5856cc6-4ljwz_kube-system(3113f557-b6a5-4908-ba42-8d109d0c1ae0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.87.1:53: no such host"
	Oct 26 02:42:48 old-k8s-version-159000 kubelet[6862]: E1026 02:42:48.118276    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:42:54 old-k8s-version-159000 kubelet[6862]: E1026 02:42:54.119207    6862 pod_workers.go:191] Error syncing pod 3113f557-b6a5-4908-ba42-8d109d0c1ae0 ("metrics-server-74d5856cc6-4ljwz_kube-system(3113f557-b6a5-4908-ba42-8d109d0c1ae0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 02:43:02 old-k8s-version-159000 kubelet[6862]: W1026 02:43:02.610008    6862 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-zsfpj through plugin: invalid network status for
	Oct 26 02:43:02 old-k8s-version-159000 kubelet[6862]: E1026 02:43:02.615062    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:43:03 old-k8s-version-159000 kubelet[6862]: W1026 02:43:03.623403    6862 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-zsfpj through plugin: invalid network status for
	Oct 26 02:43:05 old-k8s-version-159000 kubelet[6862]: E1026 02:43:05.399467    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	Oct 26 02:43:07 old-k8s-version-159000 kubelet[6862]: E1026 02:43:07.119460    6862 pod_workers.go:191] Error syncing pod 3113f557-b6a5-4908-ba42-8d109d0c1ae0 ("metrics-server-74d5856cc6-4ljwz_kube-system(3113f557-b6a5-4908-ba42-8d109d0c1ae0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 02:43:19 old-k8s-version-159000 kubelet[6862]: E1026 02:43:19.117753    6862 pod_workers.go:191] Error syncing pod 3113f557-b6a5-4908-ba42-8d109d0c1ae0 ("metrics-server-74d5856cc6-4ljwz_kube-system(3113f557-b6a5-4908-ba42-8d109d0c1ae0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 26 02:43:20 old-k8s-version-159000 kubelet[6862]: E1026 02:43:20.117388    6862 pod_workers.go:191] Error syncing pod dd44af97-1919-4e1f-bbf5-ad3737fa07c8 ("dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-zsfpj_kubernetes-dashboard(dd44af97-1919-4e1f-bbf5-ad3737fa07c8)"
	
	* 
	* ==> kubernetes-dashboard [9d87ba3bfbe8] <==
	* 2023/10/26 02:42:05 Using namespace: kubernetes-dashboard
	2023/10/26 02:42:05 Using in-cluster config to connect to apiserver
	2023/10/26 02:42:05 Using secret token for csrf signing
	2023/10/26 02:42:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/26 02:42:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/26 02:42:05 Successful initial request to the apiserver, version: v1.16.0
	2023/10/26 02:42:05 Generating JWE encryption key
	2023/10/26 02:42:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/26 02:42:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/26 02:42:06 Initializing JWE encryption key from synchronized object
	2023/10/26 02:42:06 Creating in-cluster Sidecar client
	2023/10/26 02:42:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/26 02:42:06 Serving insecurely on HTTP port: 9090
	2023/10/26 02:42:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/26 02:43:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/26 02:42:05 Starting overwatch
	
	* 
	* ==> storage-provisioner [76c0ffd7c5f4] <==
	* I1026 02:41:59.753802       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 02:41:59.760603       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 02:41:59.760793       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 02:41:59.767279       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 02:41:59.767565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"36ce8890-498f-47a5-931b-f9c68d86d2d7", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-159000_d514d87e-e483-434d-98a3-f73f33b2c697 became leader
	I1026 02:41:59.767941       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-159000_d514d87e-e483-434d-98a3-f73f33b2c697!
	I1026 02:41:59.869374       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-159000_d514d87e-e483-434d-98a3-f73f33b2c697!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-159000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-4ljwz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-159000 describe pod metrics-server-74d5856cc6-4ljwz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-159000 describe pod metrics-server-74d5856cc6-4ljwz: exit status 1 (49.835029ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-4ljwz" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-159000 describe pod metrics-server-74d5856cc6-4ljwz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.76s)
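For local triage of this failure mode, the post-mortem sequence the helpers log above (list non-running pods with a field selector, then describe each one) can be scripted. A minimal sketch in Go, assuming kubectl is on PATH and the kubeconfig still contains the old-k8s-version-159000 context from this run; this helper is hypothetical and not part of the minikube test suite:

package main

// Hypothetical post-mortem helper mirroring the triage steps logged above;
// it is not part of the minikube test suite.
import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output; the error is
// ignored because, as in the log above, `kubectl describe` exits non-zero
// once the pod has been deleted.
func run(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	ctx := "old-k8s-version-159000" // context name from this test run

	// Same field selector the helpers use to list non-running pods.
	pods := run("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
	fmt.Println("non-running pods:", pods)

	// Describe each one. As in the logged helper, no namespace flag is
	// passed, so a pod outside the default namespace (or one deleted in
	// the meantime, as happened here) yields a NotFound error.
	for _, p := range strings.Fields(pods) {
		fmt.Println(run("kubectl", "--context", ctx, "describe", "pod", p))
	}
}

The NotFound in the stderr block above is consistent with this race: metrics-server-74d5856cc6-4ljwz was reported non-running by the first query and gone by the time the describe ran.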


Test pass (300/322)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 35.27
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
10 TestDownloadOnly/v1.28.3/json-events 40.91
11 TestDownloadOnly/v1.28.3/preload-exists 0
14 TestDownloadOnly/v1.28.3/kubectl 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.32
16 TestDownloadOnly/DeleteAll 0.41
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
19 TestBinaryMirror 0.98
20 TestOffline 53.64
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
25 TestAddons/Setup 156.38
27 TestAddons/parallel/Registry 14.67
28 TestAddons/parallel/Ingress 20.81
29 TestAddons/parallel/InspektorGadget 10.56
30 TestAddons/parallel/MetricsServer 5.55
31 TestAddons/parallel/HelmTiller 10.02
33 TestAddons/parallel/CSI 84.28
34 TestAddons/parallel/Headlamp 13.25
35 TestAddons/parallel/CloudSpanner 5.4
36 TestAddons/parallel/LocalPath 10.06
37 TestAddons/parallel/NvidiaDevicePlugin 5.44
40 TestAddons/serial/GCPAuth/Namespaces 0.1
41 TestAddons/StoppedEnableDisable 5.79
42 TestCertOptions 37.69
43 TestCertExpiration 241.53
44 TestDockerFlags 43.83
45 TestForceSystemdFlag 38.79
46 TestForceSystemdEnv 39.82
49 TestHyperKitDriverInstallOrUpdate 6.54
52 TestErrorSpam/setup 35.18
53 TestErrorSpam/start 1.57
54 TestErrorSpam/status 0.5
55 TestErrorSpam/pause 1.28
56 TestErrorSpam/unpause 1.32
57 TestErrorSpam/stop 3.69
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 51.87
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 41.41
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.06
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.62
69 TestFunctional/serial/CacheCmd/cache/add_local 1.61
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
74 TestFunctional/serial/CacheCmd/cache/delete 0.17
75 TestFunctional/serial/MinikubeKubectlCmd 0.56
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.8
77 TestFunctional/serial/ExtraConfig 37.64
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 3.29
80 TestFunctional/serial/LogsFileCmd 3.19
81 TestFunctional/serial/InvalidService 4.98
83 TestFunctional/parallel/ConfigCmd 0.55
84 TestFunctional/parallel/DashboardCmd 18.29
85 TestFunctional/parallel/DryRun 1.28
86 TestFunctional/parallel/InternationalLanguage 0.8
87 TestFunctional/parallel/StatusCmd 0.48
91 TestFunctional/parallel/ServiceCmdConnect 7.6
92 TestFunctional/parallel/AddonsCmd 0.27
93 TestFunctional/parallel/PersistentVolumeClaim 26.71
95 TestFunctional/parallel/SSHCmd 0.3
96 TestFunctional/parallel/CpCmd 0.68
97 TestFunctional/parallel/MySQL 27.51
98 TestFunctional/parallel/FileSync 0.2
99 TestFunctional/parallel/CertSync 1.22
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.15
107 TestFunctional/parallel/License 0.48
108 TestFunctional/parallel/Version/short 0.12
109 TestFunctional/parallel/Version/components 0.45
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.18
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
114 TestFunctional/parallel/ImageCommands/ImageBuild 2.12
115 TestFunctional/parallel/ImageCommands/Setup 2.82
116 TestFunctional/parallel/DockerEnv/bash 0.8
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.24
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.44
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.25
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.52
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.28
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.41
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
127 TestFunctional/parallel/ServiceCmd/DeployApp 13.13
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
133 TestFunctional/parallel/ServiceCmd/List 0.39
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
136 TestFunctional/parallel/ServiceCmd/Format 0.25
137 TestFunctional/parallel/ServiceCmd/URL 0.25
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
145 TestFunctional/parallel/ProfileCmd/profile_list 0.28
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
147 TestFunctional/parallel/MountCmd/any-port 6.04
148 TestFunctional/parallel/MountCmd/specific-port 1.48
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
150 TestFunctional/delete_addon-resizer_images 0.14
151 TestFunctional/delete_my-image_image 0.05
152 TestFunctional/delete_minikube_cached_images 0.05
156 TestImageBuild/serial/Setup 35.83
157 TestImageBuild/serial/NormalBuild 1.25
158 TestImageBuild/serial/BuildWithBuildArg 0.72
159 TestImageBuild/serial/BuildWithDockerIgnore 0.24
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
163 TestIngressAddonLegacy/StartLegacyK8sCluster 72.5
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.3
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
167 TestIngressAddonLegacy/serial/ValidateIngressAddons 41.2
170 TestJSONOutput/start/Command 49.32
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.46
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.46
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 8.17
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.84
198 TestMainNoArgs 0.08
199 TestMinikubeProfile 84.52
202 TestMountStart/serial/StartWithMountFirst 16.31
203 TestMountStart/serial/VerifyMountFirst 0.32
204 TestMountStart/serial/StartWithMountSecond 16.48
205 TestMountStart/serial/VerifyMountSecond 0.33
206 TestMountStart/serial/DeleteFirst 2.38
207 TestMountStart/serial/VerifyMountPostDelete 0.32
208 TestMountStart/serial/Stop 2.25
209 TestMountStart/serial/RestartStopped 16.41
210 TestMountStart/serial/VerifyMountPostStop 0.31
213 TestMultiNode/serial/FreshStart2Nodes 95.96
214 TestMultiNode/serial/DeployApp2Nodes 4.48
215 TestMultiNode/serial/PingHostFrom2Pods 0.97
216 TestMultiNode/serial/AddNode 32.6
217 TestMultiNode/serial/ProfileList 0.22
218 TestMultiNode/serial/CopyFile 5.69
219 TestMultiNode/serial/StopNode 2.74
220 TestMultiNode/serial/StartAfterStop 27.23
221 TestMultiNode/serial/RestartKeepsNodes 174.87
222 TestMultiNode/serial/DeleteNode 2.99
223 TestMultiNode/serial/StopMultiNode 16.49
224 TestMultiNode/serial/RestartMultiNode 111.98
225 TestMultiNode/serial/ValidateNameConflict 45.83
229 TestPreload 174.29
231 TestScheduledStopUnix 105.09
232 TestSkaffold 109.67
237 TestKubernetesUpgrade 149.7
250 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.25
251 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.17
252 TestStoppedBinaryUpgrade/Setup 0.71
253 TestStoppedBinaryUpgrade/Upgrade 165.01
255 TestPause/serial/Start 49.87
256 TestPause/serial/SecondStartNoReconfiguration 39.18
257 TestPause/serial/Pause 0.56
258 TestPause/serial/VerifyStatus 0.16
259 TestPause/serial/Unpause 0.5
260 TestPause/serial/PauseAgain 0.58
261 TestPause/serial/DeletePaused 5.28
262 TestPause/serial/VerifyDeletedResources 0.83
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.74
272 TestNoKubernetes/serial/StartWithK8s 39.07
273 TestStoppedBinaryUpgrade/MinikubeLogs 2.51
274 TestNetworkPlugins/group/auto/Start 57.7
275 TestNoKubernetes/serial/StartWithStopK8s 16.41
276 TestNoKubernetes/serial/Start 15.69
277 TestNetworkPlugins/group/auto/KubeletFlags 0.15
278 TestNetworkPlugins/group/auto/NetCatPod 12.23
279 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
280 TestNoKubernetes/serial/ProfileList 0.51
281 TestNoKubernetes/serial/Stop 2.26
282 TestNoKubernetes/serial/StartNoArgs 15.07
283 TestNetworkPlugins/group/auto/DNS 0.12
284 TestNetworkPlugins/group/auto/Localhost 0.12
285 TestNetworkPlugins/group/auto/HairPin 0.11
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
287 TestNetworkPlugins/group/kindnet/Start 58.88
288 TestNetworkPlugins/group/calico/Start 77.57
289 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
290 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
291 TestNetworkPlugins/group/kindnet/NetCatPod 14.21
292 TestNetworkPlugins/group/kindnet/DNS 0.13
293 TestNetworkPlugins/group/kindnet/Localhost 0.11
294 TestNetworkPlugins/group/kindnet/HairPin 0.11
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.17
297 TestNetworkPlugins/group/calico/NetCatPod 15.24
298 TestNetworkPlugins/group/custom-flannel/Start 59.07
299 TestNetworkPlugins/group/calico/DNS 0.13
300 TestNetworkPlugins/group/calico/Localhost 0.1
301 TestNetworkPlugins/group/calico/HairPin 0.11
302 TestNetworkPlugins/group/false/Start 50.47
303 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.19
304 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.22
305 TestNetworkPlugins/group/custom-flannel/DNS 0.12
306 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
307 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
308 TestNetworkPlugins/group/false/KubeletFlags 0.16
309 TestNetworkPlugins/group/false/NetCatPod 17.21
310 TestNetworkPlugins/group/enable-default-cni/Start 49.27
311 TestNetworkPlugins/group/false/DNS 0.14
312 TestNetworkPlugins/group/false/Localhost 0.11
313 TestNetworkPlugins/group/false/HairPin 0.12
314 TestNetworkPlugins/group/flannel/Start 58.75
315 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
316 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.22
317 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
318 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
319 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
320 TestNetworkPlugins/group/bridge/Start 51.99
321 TestNetworkPlugins/group/flannel/ControllerPod 5.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
323 TestNetworkPlugins/group/flannel/NetCatPod 12.21
324 TestNetworkPlugins/group/flannel/DNS 0.14
325 TestNetworkPlugins/group/flannel/Localhost 0.11
326 TestNetworkPlugins/group/flannel/HairPin 0.11
327 TestNetworkPlugins/group/kubenet/Start 87.99
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
329 TestNetworkPlugins/group/bridge/NetCatPod 11.28
330 TestNetworkPlugins/group/bridge/DNS 0.13
331 TestNetworkPlugins/group/bridge/Localhost 0.11
332 TestNetworkPlugins/group/bridge/HairPin 0.1
334 TestStartStop/group/old-k8s-version/serial/FirstStart 129.82
335 TestNetworkPlugins/group/kubenet/KubeletFlags 0.17
336 TestNetworkPlugins/group/kubenet/NetCatPod 14.22
337 TestNetworkPlugins/group/kubenet/DNS 0.12
338 TestNetworkPlugins/group/kubenet/Localhost 0.11
339 TestNetworkPlugins/group/kubenet/HairPin 0.11
341 TestStartStop/group/no-preload/serial/FirstStart 56.5
342 TestStartStop/group/old-k8s-version/serial/DeployApp 9.3
343 TestStartStop/group/no-preload/serial/DeployApp 9.29
344 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.69
345 TestStartStop/group/old-k8s-version/serial/Stop 8.29
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
347 TestStartStop/group/no-preload/serial/Stop 8.26
348 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
349 TestStartStop/group/old-k8s-version/serial/SecondStart 466.75
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
351 TestStartStop/group/no-preload/serial/SecondStart 307.37
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
353 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
354 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
355 TestStartStop/group/no-preload/serial/Pause 1.92
357 TestStartStop/group/embed-certs/serial/FirstStart 50.7
358 TestStartStop/group/embed-certs/serial/DeployApp 8.29
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
360 TestStartStop/group/embed-certs/serial/Stop 8.3
361 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
362 TestStartStop/group/embed-certs/serial/SecondStart 299.42
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
366 TestStartStop/group/old-k8s-version/serial/Pause 1.74
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.2
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.86
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.27
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 297.52
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
377 TestStartStop/group/embed-certs/serial/Pause 1.86
379 TestStartStop/group/newest-cni/serial/FirstStart 46.99
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
382 TestStartStop/group/newest-cni/serial/Stop 8.25
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
384 TestStartStop/group/newest-cni/serial/SecondStart 37.65
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
388 TestStartStop/group/newest-cni/serial/Pause 1.79
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.02
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.86
TestDownloadOnly/v1.16.0/json-events (35.27s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-430000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-430000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (35.270481206s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (35.27s)
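
Note: "minikube start -o=json" emits one JSON object per line, and the json-events tests consume that stream. A minimal sketch of a consumer, assuming only the CloudEvents-style "type" field that minikube's JSON output carries; anything beyond that is illustrative, not the test's actual parser:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just enough of one output line to route on its type.
type event struct {
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

func main() {
	// e.g. piped from: minikube start -o=json --download-only ...
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
			continue
		}
		fmt.Println("event:", ev.Type)
	}
}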

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-430000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-430000: exit status 85 (316.363142ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-430000 | jenkins | v1.31.2 | 25 Oct 23 18:44 PDT |          |
	|         | -p download-only-430000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:44:56
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:44:56.178897   77292 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:44:56.179199   77292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:44:56.179205   77292 out.go:309] Setting ErrFile to fd 2...
	I1025 18:44:56.179209   77292 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:44:56.179398   77292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	W1025 18:44:56.179503   77292 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17491-76819/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17491-76819/.minikube/config/config.json: no such file or directory
	I1025 18:44:56.181160   77292 out.go:303] Setting JSON to true
	I1025 18:44:56.203773   77292 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":35064,"bootTime":1698249632,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1025 18:44:56.203871   77292 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:44:56.228285   77292 out.go:97] [download-only-430000] minikube v1.31.2 on Darwin 14.0
	I1025 18:44:56.250588   77292 out.go:169] MINIKUBE_LOCATION=17491
	I1025 18:44:56.228549   77292 notify.go:220] Checking for updates...
	W1025 18:44:56.228544   77292 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 18:44:56.294195   77292 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 18:44:56.336320   77292 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:44:56.378521   77292 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:44:56.400677   77292 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	W1025 18:44:56.443360   77292 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 18:44:56.443853   77292 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:44:56.474596   77292 out.go:97] Using the hyperkit driver based on user configuration
	I1025 18:44:56.474663   77292 start.go:298] selected driver: hyperkit
	I1025 18:44:56.474676   77292 start.go:902] validating driver "hyperkit" against <nil>
	I1025 18:44:56.474912   77292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:44:56.475134   77292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17491-76819/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1025 18:44:56.613242   77292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1025 18:44:56.617452   77292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 18:44:56.617474   77292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1025 18:44:56.617509   77292 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 18:44:56.621084   77292 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1025 18:44:56.621244   77292 start_flags.go:916] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 18:44:56.621301   77292 cni.go:84] Creating CNI manager for ""
	I1025 18:44:56.621317   77292 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:44:56.621330   77292 start_flags.go:323] config:
	{Name:download-only-430000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-430000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:44:56.621593   77292 iso.go:125] acquiring lock: {Name:mk28dd82d77e5b41d6d5779f6c9eefa1a75d61e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:44:56.643467   77292 out.go:97] Downloading VM boot image ...
	I1025 18:44:56.643583   77292 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1025 18:45:00.768505   77292 out.go:97] Starting control plane node download-only-430000 in cluster download-only-430000
	I1025 18:45:00.768540   77292 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:45:00.829732   77292 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 18:45:00.829767   77292 cache.go:56] Caching tarball of preloaded images
	I1025 18:45:00.830120   77292 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:45:00.850390   77292 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 18:45:00.850437   77292 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 18:45:00.929841   77292 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 18:45:05.841017   77292 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 18:45:05.841192   77292 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 18:45:06.392627   77292 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 18:45:06.392866   77292 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/download-only-430000/config.json ...
	I1025 18:45:06.392889   77292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/download-only-430000/config.json: {Name:mk6ae9ec2c73df5d196051f28f1d481a3846615c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:45:06.393186   77292 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 18:45:06.393459   77292 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-430000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)
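
Note: the download.go lines above append "?checksum=md5:..." (or "checksum=file:<url>" pointing at a published digest) to each URL, so every artifact is verified before it is cached. A minimal sketch of that verify-while-downloading pattern, with the URL and digest lifted from the log purely for illustration; this is not minikube's actual downloader:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url to dest, hashing as it writes, and rejects the
// file if the digest does not match wantMD5.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Stream to disk and into the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		os.TempDir()+"/preload.tar.lz4",
		"326f3ce331abb64565b50b8c9e791244", // md5 from the log line above
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}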

TestDownloadOnly/v1.28.3/json-events (40.91s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-430000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-430000 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=hyperkit : (40.907930114s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (40.91s)

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
--- PASS: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.32s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-430000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-430000: exit status 85 (319.211962ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-430000 | jenkins | v1.31.2 | 25 Oct 23 18:44 PDT |          |
	|         | -p download-only-430000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-430000 | jenkins | v1.31.2 | 25 Oct 23 18:45 PDT |          |
	|         | -p download-only-430000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 18:45:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.21.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 18:45:31.769930   77311 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:45:31.770176   77311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:45:31.770195   77311 out.go:309] Setting ErrFile to fd 2...
	I1025 18:45:31.770199   77311 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:45:31.770381   77311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	W1025 18:45:31.770471   77311 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17491-76819/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17491-76819/.minikube/config/config.json: no such file or directory
	I1025 18:45:31.771689   77311 out.go:303] Setting JSON to true
	I1025 18:45:31.794495   77311 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":35099,"bootTime":1698249632,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1025 18:45:31.794609   77311 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:45:31.816025   77311 out.go:97] [download-only-430000] minikube v1.31.2 on Darwin 14.0
	I1025 18:45:31.837691   77311 out.go:169] MINIKUBE_LOCATION=17491
	I1025 18:45:31.816234   77311 notify.go:220] Checking for updates...
	I1025 18:45:31.879756   77311 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 18:45:31.900703   77311 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:45:31.921849   77311 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:45:31.942847   77311 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	W1025 18:45:31.986696   77311 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 18:45:31.987381   77311 config.go:182] Loaded profile config "download-only-430000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1025 18:45:31.987477   77311 start.go:810] api.Load failed for download-only-430000: filestore "download-only-430000": Docker machine "download-only-430000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 18:45:31.987632   77311 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 18:45:31.987667   77311 start.go:810] api.Load failed for download-only-430000: filestore "download-only-430000": Docker machine "download-only-430000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 18:45:32.016765   77311 out.go:97] Using the hyperkit driver based on existing profile
	I1025 18:45:32.016835   77311 start.go:298] selected driver: hyperkit
	I1025 18:45:32.016845   77311 start.go:902] validating driver "hyperkit" against &{Name:download-only-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-430000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:45:32.017144   77311 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:45:32.017316   77311 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17491-76819/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1025 18:45:32.026960   77311 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1025 18:45:32.030843   77311 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 18:45:32.030865   77311 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1025 18:45:32.033639   77311 cni.go:84] Creating CNI manager for ""
	I1025 18:45:32.033660   77311 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:45:32.033675   77311 start_flags.go:323] config:
	{Name:download-only-430000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-430000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:45:32.033823   77311 iso.go:125] acquiring lock: {Name:mk28dd82d77e5b41d6d5779f6c9eefa1a75d61e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:45:32.054744   77311 out.go:97] Starting control plane node download-only-430000 in cluster download-only-430000
	I1025 18:45:32.054769   77311 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:45:32.109753   77311 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:45:32.109822   77311 cache.go:56] Caching tarball of preloaded images
	I1025 18:45:32.110890   77311 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:45:32.134332   77311 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 18:45:32.134373   77311 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 18:45:32.216159   77311 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 18:45:38.002147   77311 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 18:45:38.002350   77311 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 18:45:38.635867   77311 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 18:45:38.635944   77311 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/download-only-430000/config.json ...
	I1025 18:45:38.636318   77311 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 18:45:38.636530   77311 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/darwin/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-430000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.32s)

TestDownloadOnly/DeleteAll (0.41s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.41s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-430000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestBinaryMirror (0.98s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-856000 --alsologtostderr --binary-mirror http://127.0.0.1:50667 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-856000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-856000
--- PASS: TestBinaryMirror (0.98s)
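
Note: the --binary-mirror flag points minikube's binary downloads at a local HTTP endpoint (here 127.0.0.1:50667). A minimal sketch of such a mirror, assuming a directory tree laid out like the upstream release bucket (e.g. ./mirror/v1.28.3/bin/darwin/amd64/kubectl); this is illustrative, not the test's own mirror, which pre-populates real binaries before pointing minikube at it:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve the pre-populated directory tree over plain HTTP.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:50667", nil))
}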

TestOffline (53.64s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-951000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-951000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (48.341377185s)
helpers_test.go:175: Cleaning up "offline-docker-951000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-951000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-951000: (5.301612876s)
--- PASS: TestOffline (53.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-112000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-112000: exit status 85 (194.636956ms)

-- stdout --
	* Profile "addons-112000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-112000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-112000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-112000: exit status 85 (213.790565ms)

-- stdout --
	* Profile "addons-112000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-112000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (156.38s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-112000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-112000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m36.383583225s)
--- PASS: TestAddons/Setup (156.38s)

TestAddons/parallel/Registry (14.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 13.691317ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-4zd2s" [be919a48-2d1a-421c-95d7-d4c37945d6c4] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00893626s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cxg5t" [2ed8096b-8e45-47d9-8a92-4d525c587b7b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008917037s
addons_test.go:339: (dbg) Run:  kubectl --context addons-112000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-112000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-112000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.920184961s)
addons_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 ip
2023/10/25 18:49:05 [DEBUG] GET http://192.168.85.75:5000
addons_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.67s)
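
Note: the "waiting 6m0s for pods matching ..." lines above are a label-selector poll. A minimal sketch of that wait, assuming kubectl on PATH; the real helper likely talks to the API through the Kubernetes client libraries, so this shows the pattern, not the implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pods matching selector until one reports phase
// Running or the deadline passes.
func waitForRunning(kubeContext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", kubeContext, "-n", ns,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in %s", selector, ns)
}

func main() {
	if err := waitForRunning("addons-112000", "kube-system", "actual-registry=true", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}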

TestAddons/parallel/Ingress (20.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-112000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-112000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-112000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5647e53b-e314-41e4-ad9a-c39659953aeb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5647e53b-e314-41e4-ad9a-c39659953aeb] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010377091s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-112000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.85.75
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p addons-112000 addons disable ingress --alsologtostderr -v=1: (7.51659275s)
--- PASS: TestAddons/parallel/Ingress (20.81s)

TestAddons/parallel/InspektorGadget (10.56s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dz2m7" [03d1b982-b857-4582-8f92-e4869cbac806] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010170939s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-112000
addons_test.go:840: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-112000: (5.551582214s)
--- PASS: TestAddons/parallel/InspektorGadget (10.56s)

TestAddons/parallel/MetricsServer (5.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 2.718811ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-ljg7j" [74f2c4b0-e651-4288-ba12-432e86e5582b] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011797403s
addons_test.go:414: (dbg) Run:  kubectl --context addons-112000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)

TestAddons/parallel/HelmTiller (10.02s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 2.616507ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-f9v42" [08cbe026-7484-4521-a6a5-134652c56500] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011296864s
addons_test.go:472: (dbg) Run:  kubectl --context addons-112000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-112000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.550100609s)
addons_test.go:489: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.02s)

TestAddons/parallel/CSI (84.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 14.110207ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-112000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-112000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1efe51fe-9606-4c4c-b946-509f535b133c] Pending
helpers_test.go:344: "task-pv-pod" [1efe51fe-9606-4c4c-b946-509f535b133c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1efe51fe-9606-4c4c-b946-509f535b133c] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.013011188s
addons_test.go:583: (dbg) Run:  kubectl --context addons-112000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-112000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-112000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-112000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-112000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-112000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-112000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-112000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e54e1c62-6cd1-4dc1-bb72-5a93343675c9] Pending
helpers_test.go:344: "task-pv-pod-restore" [e54e1c62-6cd1-4dc1-bb72-5a93343675c9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e54e1c62-6cd1-4dc1-bb72-5a93343675c9] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.017263678s
addons_test.go:625: (dbg) Run:  kubectl --context addons-112000 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-112000 delete pod task-pv-pod-restore: (1.188528086s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-112000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-112000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-amd64 -p addons-112000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.717298012s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (84.28s)
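The long runs of identical helpers_test.go:394/:419 lines above are the same retry pattern applied to different jsonpath queries: pvc .status.phase until Bound, then volumesnapshot .status.readyToUse until true. A minimal Go sketch of that pattern (hypothetical function; assumes kubectl on PATH and the context/namespace from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForJSONPath re-runs one kubectl jsonpath query until it yields the
    // wanted value or the timeout elapses, mirroring the repeated Run lines.
    func waitForJSONPath(kind, name, path, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "addons-112000",
                "get", kind, name, "-n", "default", "-o", "jsonpath="+path).Output()
            if err == nil && string(out) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("%s/%s: %s did not reach %q within %v", kind, name, path, want, timeout)
    }

    func main() {
        fmt.Println(waitForJSONPath("pvc", "hpvc", "{.status.phase}", "Bound", 6*time.Minute))
        fmt.Println(waitForJSONPath("volumesnapshot", "new-snapshot-demo",
            "{.status.readyToUse}", "true", 6*time.Minute))
    }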

TestAddons/parallel/Headlamp (13.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-112000 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-112000 --alsologtostderr -v=1: (1.237832861s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-27jjv" [25e306e8-e4ba-4009-b73d-99262bccc125] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-27jjv" [25e306e8-e4ba-4009-b73d-99262bccc125] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.011890816s
--- PASS: TestAddons/parallel/Headlamp (13.25s)

TestAddons/parallel/CloudSpanner (5.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-ztt4g" [6fb76422-2fba-4c96-a1c5-8f07b21e9d6b] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006578704s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-112000
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

TestAddons/parallel/LocalPath (10.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-112000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-112000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a8536c71-3964-4ae1-bcb8-451a72348db1] Pending
helpers_test.go:344: "test-local-path" [a8536c71-3964-4ae1-bcb8-451a72348db1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a8536c71-3964-4ae1-bcb8-451a72348db1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a8536c71-3964-4ae1-bcb8-451a72348db1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.007043755s
addons_test.go:890: (dbg) Run:  kubectl --context addons-112000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 ssh "cat /opt/local-path-provisioner/pvc-2eb2ccfb-3f53-4d96-96db-771a1bf606b9_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-112000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-112000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-amd64 -p addons-112000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.06s)

TestAddons/parallel/NvidiaDevicePlugin (5.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-m4k8l" [f4f5a9ed-aba9-4ede-a5b3-6d1fffe8ed6a] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.009041812s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-112000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.44s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-112000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-112000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.79s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-112000
addons_test.go:171: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-112000: (5.243300712s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-112000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-112000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-112000
--- PASS: TestAddons/StoppedEnableDisable (5.79s)

TestCertOptions (37.69s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-877000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E1025 19:19:09.580135   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:19:09.978154   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-877000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (33.983153429s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-877000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-877000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-877000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-877000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-877000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-877000: (3.362473666s)
--- PASS: TestCertOptions (37.69s)
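The openssl invocation at cert_options_test.go:60 verifies that the extra --apiserver-ips/--apiserver-names values landed in the API server certificate's SANs. The same inspection can be done with Go's crypto/x509; the file path below is illustrative (a local copy of /var/lib/minikube/certs/apiserver.crt fetched out of the VM):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Illustrative path: a local copy of the cert checked above with openssl.
        pemBytes, err := os.ReadFile("apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost, www.google.com, ...
        fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
    }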

TestCertExpiration (241.53s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-758000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E1025 19:18:51.605446   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-758000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (33.982188881s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-758000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-758000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (22.284477885s)
helpers_test.go:175: Cleaning up "cert-expiration-758000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-758000
E1025 19:22:46.520364   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-758000: (5.264999607s)
--- PASS: TestCertExpiration (241.53s)

TestDockerFlags (43.83s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-696000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-696000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (38.213876687s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-696000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-696000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-696000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-696000: (5.268611671s)
--- PASS: TestDockerFlags (43.83s)
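docker_test.go:56 asserts that the --docker-env values survive into the Docker systemd unit. A stand-alone sketch of that assertion, assuming it runs somewhere systemctl can see the docker unit (for example inside `minikube ssh`, as the test does):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same property query the test runs over ssh; the Environment value
        // should carry every --docker-env flag passed at start.
        out, err := exec.Command("systemctl", "show", "docker",
            "--property=Environment", "--no-pager").Output()
        if err != nil {
            panic(err)
        }
        env := string(out)
        for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
            if !strings.Contains(env, want) {
                fmt.Println("missing docker env:", want)
            }
        }
    }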

TestForceSystemdFlag (38.79s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-872000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-872000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (33.291506859s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-872000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-872000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-872000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-872000: (5.314582944s)
--- PASS: TestForceSystemdFlag (38.79s)

TestForceSystemdEnv (39.82s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-737000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E1025 19:17:46.527136   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-737000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (34.386123244s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-737000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-737000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-737000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-737000: (5.266951115s)
--- PASS: TestForceSystemdEnv (39.82s)

TestHyperKitDriverInstallOrUpdate (6.54s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.54s)

TestErrorSpam/setup (35.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-066000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-066000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 --driver=hyperkit : (35.176306466s)
--- PASS: TestErrorSpam/setup (35.18s)

TestErrorSpam/start (1.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 start --dry-run
--- PASS: TestErrorSpam/start (1.57s)

TestErrorSpam/status (0.5s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 status
--- PASS: TestErrorSpam/status (0.50s)

TestErrorSpam/pause (1.28s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 pause
--- PASS: TestErrorSpam/pause (1.28s)

TestErrorSpam/unpause (1.32s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (3.69s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 stop: (3.240010484s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-066000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-066000 stop
--- PASS: TestErrorSpam/stop (3.69s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/test/nested/copy/77290/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-441000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-441000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (51.867557246s)
--- PASS: TestFunctional/serial/StartWithProxy (51.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.41s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-441000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-441000 --alsologtostderr -v=8: (41.408058393s)
functional_test.go:659: soft start took 41.408511862s for "functional-441000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.41s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-441000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 cache add registry.k8s.io/pause:3.1: (1.58949222s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 cache add registry.k8s.io/pause:3.3: (1.491014373s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 cache add registry.k8s.io/pause:latest: (1.53477976s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.62s)

TestFunctional/serial/CacheCmd/cache/add_local (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local848193170/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cache add minikube-local-cache-test:functional-441000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cache delete minikube-local-cache-test:functional-441000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-441000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.61s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.57342ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 cache reload: (1.021924425s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 kubectl -- --context functional-441000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.8s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-441000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.80s)

TestFunctional/serial/ExtraConfig (37.64s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-441000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 18:53:51.728745   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:51.755346   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:51.767361   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:51.788023   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:51.828368   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:51.909368   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:52.071035   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:52.391338   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:53.031559   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:53:54.313592   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-441000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.641566908s)
functional_test.go:757: restart took 37.641724455s for "functional-441000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.64s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-441000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
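The phase/status lines above come from parsing `kubectl get po -l tier=control-plane -o=json`. A trimmed-down Go sketch of that health check (struct fields reduced to what the check reads; context name taken from this run):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type podList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Phase      string `json:"phase"`
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context", "functional-441000",
            "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var pods podList
        if err := json.Unmarshal(out, &pods); err != nil {
            panic(err)
        }
        // Report each control-plane pod's phase and Ready condition.
        for _, p := range pods.Items {
            ready := "Unknown"
            for _, c := range p.Status.Conditions {
                if c.Type == "Ready" {
                    ready = c.Status
                }
            }
            fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
        }
    }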

TestFunctional/serial/LogsCmd (3.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 logs
E1025 18:53:56.874003   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 logs: (3.29426092s)
--- PASS: TestFunctional/serial/LogsCmd (3.29s)

TestFunctional/serial/LogsFileCmd (3.19s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2288605344/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2288605344/001/logs.txt: (3.186028444s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.19s)

TestFunctional/serial/InvalidService (4.98s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-441000 apply -f testdata/invalidsvc.yaml
E1025 18:54:01.996003   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-441000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-441000: exit status 115 (284.299181ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.85.77:31883 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-441000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-441000 delete -f testdata/invalidsvc.yaml: (1.496398004s)
--- PASS: TestFunctional/serial/InvalidService (4.98s)

TestFunctional/parallel/ConfigCmd (0.55s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 config get cpus: exit status 14 (71.36818ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 config get cpus: exit status 14 (58.967446ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)
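The two `config get cpus` calls above exit with status 14 when the key is unset. A caller scripting minikube from Go can distinguish that from a real failure via *exec.ExitError, as in this sketch (binary path as used in this run):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // `config get` on an unset key exits 14, as seen twice above; tell
        // that apart from other failures by inspecting the exit code.
        out, err := exec.Command("out/minikube-darwin-amd64", "-p", "functional-441000",
            "config", "get", "cpus").Output()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("cpus =", string(out))
        case errors.As(err, &ee) && ee.ExitCode() == 14:
            fmt.Println("cpus is not set")
        default:
            panic(err)
        }
    }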

TestFunctional/parallel/DashboardCmd (18.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-441000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-441000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 78733: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.29s)

TestFunctional/parallel/DryRun (1.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-441000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-441000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (526.306393ms)

-- stdout --
	* [functional-441000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17491
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 18:55:06.751347   78680 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:55:06.751552   78680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:06.751557   78680 out.go:309] Setting ErrFile to fd 2...
	I1025 18:55:06.751562   78680 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:06.751744   78680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	I1025 18:55:06.753140   78680 out.go:303] Setting JSON to false
	I1025 18:55:06.775478   78680 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":35674,"bootTime":1698249632,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1025 18:55:06.775580   78680 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:06.798893   78680 out.go:177] * [functional-441000] minikube v1.31.2 on Darwin 14.0
	I1025 18:55:06.840847   78680 out.go:177]   - MINIKUBE_LOCATION=17491
	I1025 18:55:06.840875   78680 notify.go:220] Checking for updates...
	I1025 18:55:06.886794   78680 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 18:55:06.907703   78680 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:55:06.928650   78680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:06.949997   78680 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	I1025 18:55:06.971819   78680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:06.993339   78680 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:55:06.994017   78680 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 18:55:06.994104   78680 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 18:55:07.003259   78680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51867
	I1025 18:55:07.003626   78680 main.go:141] libmachine: () Calling .GetVersion
	I1025 18:55:07.004032   78680 main.go:141] libmachine: Using API Version  1
	I1025 18:55:07.004044   78680 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 18:55:07.004250   78680 main.go:141] libmachine: () Calling .GetMachineName
	I1025 18:55:07.004341   78680 main.go:141] libmachine: (functional-441000) Calling .DriverName
	I1025 18:55:07.004517   78680 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:55:07.004753   78680 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 18:55:07.004782   78680 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 18:55:07.013483   78680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51869
	I1025 18:55:07.013954   78680 main.go:141] libmachine: () Calling .GetVersion
	I1025 18:55:07.014442   78680 main.go:141] libmachine: Using API Version  1
	I1025 18:55:07.014461   78680 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 18:55:07.014820   78680 main.go:141] libmachine: () Calling .GetMachineName
	I1025 18:55:07.014972   78680 main.go:141] libmachine: (functional-441000) Calling .DriverName
	I1025 18:55:07.044916   78680 out.go:177] * Using the hyperkit driver based on existing profile
	I1025 18:55:07.086762   78680 start.go:298] selected driver: hyperkit
	I1025 18:55:07.086779   78680 start.go:902] validating driver "hyperkit" against &{Name:functional-441000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-441000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.85.77 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:55:07.086906   78680 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:07.111860   78680 out.go:177] 
	W1025 18:55:07.137703   78680 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 18:55:07.158463   78680 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-441000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.28s)
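
The failure above is the expected outcome: --dry-run with --memory 250MB must be rejected by validation (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) before any hyperkit work starts. A minimal sketch of reproducing that assertion outside the harness, assuming the same out/minikube-darwin-amd64 binary and functional-441000 profile used in this run:

	// dryrun_sketch_test.go — illustrative only, not part of the suite.
	package sketch

	import (
		"errors"
		"os/exec"
		"strings"
		"testing"
	)

	func TestDryRunRejectsTinyMemory(t *testing.T) {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-441000",
			"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=hyperkit")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if !errors.As(err, &ee) || ee.ExitCode() != 23 { // exit status 23, as logged above
			t.Fatalf("want exit status 23, got %v\n%s", err, out)
		}
		if !strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
			t.Fatalf("expected RSRC_INSUFFICIENT_REQ_MEMORY in output:\n%s", out)
		}
	}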

TestFunctional/parallel/InternationalLanguage (0.8s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-441000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-441000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (800.580549ms)

-- stdout --
	* [functional-441000] minikube v1.31.2 sur Darwin 14.0
	  - MINIKUBE_LOCATION=17491
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 18:55:07.179815   78690 out.go:296] Setting OutFile to fd 1 ...
	I1025 18:55:07.180199   78690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:07.180209   78690 out.go:309] Setting ErrFile to fd 2...
	I1025 18:55:07.180217   78690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:07.180571   78690 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	I1025 18:55:07.217113   78690 out.go:303] Setting JSON to false
	I1025 18:55:07.244358   78690 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":35675,"bootTime":1698249632,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W1025 18:55:07.244467   78690 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:07.265257   78690 out.go:177] * [functional-441000] minikube v1.31.2 sur Darwin 14.0
	I1025 18:55:07.344497   78690 out.go:177]   - MINIKUBE_LOCATION=17491
	I1025 18:55:07.307596   78690 notify.go:220] Checking for updates...
	I1025 18:55:07.386516   78690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	I1025 18:55:07.470381   78690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1025 18:55:07.512604   78690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:07.554496   78690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	I1025 18:55:07.596528   78690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:07.618017   78690 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 18:55:07.618368   78690 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 18:55:07.618416   78690 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 18:55:07.626761   78690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51875
	I1025 18:55:07.627123   78690 main.go:141] libmachine: () Calling .GetVersion
	I1025 18:55:07.627534   78690 main.go:141] libmachine: Using API Version  1
	I1025 18:55:07.627547   78690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 18:55:07.627761   78690 main.go:141] libmachine: () Calling .GetMachineName
	I1025 18:55:07.627865   78690 main.go:141] libmachine: (functional-441000) Calling .DriverName
	I1025 18:55:07.628037   78690 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 18:55:07.628269   78690 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 18:55:07.628291   78690 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 18:55:07.636044   78690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51877
	I1025 18:55:07.636365   78690 main.go:141] libmachine: () Calling .GetVersion
	I1025 18:55:07.636687   78690 main.go:141] libmachine: Using API Version  1
	I1025 18:55:07.636698   78690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 18:55:07.636900   78690 main.go:141] libmachine: () Calling .GetMachineName
	I1025 18:55:07.636989   78690 main.go:141] libmachine: (functional-441000) Calling .DriverName
	I1025 18:55:07.701527   78690 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1025 18:55:07.743829   78690 start.go:298] selected driver: hyperkit
	I1025 18:55:07.743846   78690 start.go:902] validating driver "hyperkit" against &{Name:functional-441000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-441000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.85.77 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 18:55:07.744052   78690 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:07.785426   78690 out.go:177] 
	W1025 18:55:07.822912   78690 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 18:55:07.885588   78690 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.80s)
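
The French stderr is the point of this test: under a French locale the same memory validation fails with localized text. "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" translates to "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB", matching the English DryRun failure above. A minimal sketch, assuming the locale is picked up from LC_ALL (an assumption; the harness may select it differently):

	// i18n_sketch_test.go — illustrative only.
	package sketch

	import (
		"os"
		"os/exec"
		"strings"
		"testing"
	)

	func TestFrenchLocaleOutput(t *testing.T) {
		cmd := exec.Command("out/minikube-darwin-amd64", "start", "-p", "functional-441000",
			"--dry-run", "--memory", "250MB", "--driver=hyperkit")
		cmd.Env = append(os.Environ(), "LC_ALL=fr") // hypothetical locale selection
		out, _ := cmd.CombinedOutput() // exit status 23 is expected, as in the English run
		if !strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
			t.Fatalf("expected localized failure message, got:\n%s", out)
		}
	}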

TestFunctional/parallel/StatusCmd (0.48s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.48s)
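
Note that "kublet:" in the -f format string above is literal label text in the template (the field actually dereferenced is {{.Kubelet}}), so the command output carries the misspelled label while still reading the correct field. minikube renders these -f strings with Go's text/template; a sketch of the rendering, with a hypothetical Status struct standing in for minikube's internal one:

	package sketch

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for the struct minikube renders with -f.
	type Status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		_ = tmpl.Execute(os.Stdout, Status{
			Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
		})
		// prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	}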

TestFunctional/parallel/ServiceCmdConnect (7.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-441000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-441000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-htnqv" [40d21fa6-4d5f-424e-859a-8f68eb3acca8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-htnqv" [40d21fa6-4d5f-424e-859a-8f68eb3acca8] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.014701975s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.85.77:31993
functional_test.go:1674: http://192.168.85.77:31993: success! body:

Hostname: hello-node-connect-55497b8b78-htnqv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.85.77:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.85.77:31993
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.60s)
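
The connectivity check reduces to three steps: expose the deployment on a NodePort, resolve the URL with `minikube service --url`, then poll that URL until the echoserver answers. A sketch of the polling step (the URL is hard-coded from this run; in practice it comes from the service command):

	package sketch

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollEndpoint GETs url until it returns 200 OK or the deadline passes.
	func pollEndpoint(url string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return string(body), nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no 200 from %s within %v", url, timeout)
	}

	// e.g. body, err := pollEndpoint("http://192.168.85.77:31993", time.Minute)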

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (26.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2765f017-b164-41b8-9a4a-a25413802223] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009044514s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-441000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-441000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-441000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-441000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5ae3cbe-58ef-4957-86fa-a058ec3dff8f] Pending
helpers_test.go:344: "sp-pod" [e5ae3cbe-58ef-4957-86fa-a058ec3dff8f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5ae3cbe-58ef-4957-86fa-a058ec3dff8f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.014092967s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-441000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-441000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-441000 delete -f testdata/storage-provisioner/pod.yaml: (1.012281313s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-441000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c486cf62-6035-4a89-9d83-4a0adae12bc0] Pending
helpers_test.go:344: "sp-pod" [c486cf62-6035-4a89-9d83-4a0adae12bc0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c486cf62-6035-4a89-9d83-4a0adae12bc0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009893275s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-441000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.71s)
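
The persistence property is verified by a write/delete/recreate/read cycle: touch /tmp/mount/foo in the first sp-pod, delete the pod, start a second pod against the same claim, and confirm the file is still there. An illustrative manifest pair for that cycle, kept as a Go constant to stay in one language (names match the log; the storage size and image are assumptions, not the suite's exact testdata):

	package sketch

	// pvcAndPod is an illustrative PVC plus a pod mounting it at /tmp/mount.
	const pvcAndPod = `
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
`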

TestFunctional/parallel/SSHCmd (0.3s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

TestFunctional/parallel/CpCmd (0.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh -n functional-441000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 cp functional-441000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd4107776828/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh -n functional-441000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.68s)

TestFunctional/parallel/MySQL (27.51s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-441000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-rcw7f" [ae131d4a-4311-4607-a74d-719613d6ebcf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1025 18:54:12.236558   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-rcw7f" [ae131d4a-4311-4607-a74d-719613d6ebcf] Running
E1025 18:54:32.718352   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.01550016s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-441000 exec mysql-859648c796-rcw7f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-441000 exec mysql-859648c796-rcw7f -- mysql -ppassword -e "show databases;": exit status 1 (139.411123ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-441000 exec mysql-859648c796-rcw7f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-441000 exec mysql-859648c796-rcw7f -- mysql -ppassword -e "show databases;": exit status 1 (107.488211ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-441000 exec mysql-859648c796-rcw7f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.51s)
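
The two ERROR 2002 exits are expected start-up noise: the pod reports Running as soon as the container starts, but mysqld takes a few more seconds to open its socket, so the test simply retries the query until it succeeds. The same pattern as a generic helper (names are illustrative):

	package sketch

	import (
		"fmt"
		"time"
	)

	// retry runs fn up to attempts times, sleeping between failures — the
	// pattern applied above to `mysql -ppassword -e "show databases;"`.
	func retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
	}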

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/77290/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /etc/test/nested/copy/77290/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)
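
File sync works by mirroring everything under $MINIKUBE_HOME/files into the node's filesystem at start-up, which is why a file staged on the host at .../.minikube/files/etc/test/nested/copy/77290/hosts shows up inside the VM at /etc/test/nested/copy/77290/hosts (77290 matches the suite's process ID in the logs, keeping paths unique per run). A sketch of staging such a file (the helper name is hypothetical):

	package sketch

	import (
		"os"
		"path/filepath"
	)

	// stageSyncedFile writes content under $MINIKUBE_HOME/files so the next
	// `minikube start` copies it to the mirrored path inside the VM.
	func stageSyncedFile(minikubeHome, vmPath string, content []byte) error {
		dst := filepath.Join(minikubeHome, "files", vmPath)
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			return err
		}
		return os.WriteFile(dst, content, 0o644)
	}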

TestFunctional/parallel/CertSync (1.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/77290.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /etc/ssl/certs/77290.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/77290.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /usr/share/ca-certificates/77290.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/772902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /etc/ssl/certs/772902.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/772902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /usr/share/ca-certificates/772902.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.22s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-441000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh "sudo systemctl is-active crio": exit status 1 (147.46998ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)
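
The non-zero exit here is the assertion, not a failure: `systemctl is-active` exits 0 only for an active unit and 3 for an inactive one (hence "Process exited with status 3" in stderr), and with docker as the configured runtime, crio must report inactive. A sketch of the check (binary path and profile taken from this run):

	package sketch

	import (
		"os/exec"
		"strings"
		"testing"
	)

	func TestCrioInactive(t *testing.T) {
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "functional-441000",
			"ssh", "sudo systemctl is-active crio")
		out, _ := cmd.CombinedOutput() // non-zero exit is expected for an inactive unit
		if !strings.Contains(string(out), "inactive") {
			t.Fatalf("crio should be inactive under the docker runtime, got:\n%s", out)
		}
	}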

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.45s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-441000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-441000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-441000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-441000 image ls --format short --alsologtostderr:
I1025 18:55:09.268039   78729 out.go:296] Setting OutFile to fd 1 ...
I1025 18:55:09.268381   78729 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.268386   78729 out.go:309] Setting ErrFile to fd 2...
I1025 18:55:09.268390   78729 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.268597   78729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
I1025 18:55:09.269252   78729 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.269349   78729 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.269772   78729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.269822   78729 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.277916   78729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51911
I1025 18:55:09.278412   78729 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.278937   78729 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.278967   78729 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.279298   78729 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.279450   78729 main.go:141] libmachine: (functional-441000) Calling .GetState
I1025 18:55:09.279559   78729 main.go:141] libmachine: (functional-441000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1025 18:55:09.279631   78729 main.go:141] libmachine: (functional-441000) DBG | hyperkit pid from json: 77875
I1025 18:55:09.281470   78729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.281498   78729 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.289881   78729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51913
I1025 18:55:09.290267   78729 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.290755   78729 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.290776   78729 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.291103   78729 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.291226   78729 main.go:141] libmachine: (functional-441000) Calling .DriverName
I1025 18:55:09.291401   78729 ssh_runner.go:195] Run: systemctl --version
I1025 18:55:09.291424   78729 main.go:141] libmachine: (functional-441000) Calling .GetSSHHostname
I1025 18:55:09.291536   78729 main.go:141] libmachine: (functional-441000) Calling .GetSSHPort
I1025 18:55:09.291647   78729 main.go:141] libmachine: (functional-441000) Calling .GetSSHKeyPath
I1025 18:55:09.291752   78729 main.go:141] libmachine: (functional-441000) Calling .GetSSHUsername
I1025 18:55:09.291856   78729 sshutil.go:53] new ssh client: &{IP:192.168.85.77 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/functional-441000/id_rsa Username:docker}
I1025 18:55:09.326207   78729 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 18:55:09.346860   78729 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.346868   78729 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.347038   78729 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.347048   78729 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 18:55:09.347054   78729 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.347055   78729 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.347067   78729 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.347197   78729 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.347231   78729 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.347256   78729 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-441000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-441000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-441000 | ab867c720427c | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| docker.io/library/mysql                     | 5.7               | 3b85be0b10d38 | 581MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | latest            | 593aee2afb642 | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | b135667c98980 | 47.7MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-441000 image ls --format table --alsologtostderr:
I1025 18:55:09.836363   78742 out.go:296] Setting OutFile to fd 1 ...
I1025 18:55:09.836757   78742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.836764   78742 out.go:309] Setting ErrFile to fd 2...
I1025 18:55:09.836770   78742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.837007   78742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
I1025 18:55:09.837635   78742 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.837732   78742 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.838115   78742 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.838164   78742 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.846518   78742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51947
I1025 18:55:09.847096   78742 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.847732   78742 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.847765   78742 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.848081   78742 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.848224   78742 main.go:141] libmachine: (functional-441000) Calling .GetState
I1025 18:55:09.848339   78742 main.go:141] libmachine: (functional-441000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1025 18:55:09.848441   78742 main.go:141] libmachine: (functional-441000) DBG | hyperkit pid from json: 77875
I1025 18:55:09.850109   78742 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.850133   78742 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.858041   78742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51949
I1025 18:55:09.858387   78742 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.858788   78742 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.858807   78742 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.859031   78742 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.859123   78742 main.go:141] libmachine: (functional-441000) Calling .DriverName
I1025 18:55:09.859309   78742 ssh_runner.go:195] Run: systemctl --version
I1025 18:55:09.859344   78742 main.go:141] libmachine: (functional-441000) Calling .GetSSHHostname
I1025 18:55:09.859438   78742 main.go:141] libmachine: (functional-441000) Calling .GetSSHPort
I1025 18:55:09.859565   78742 main.go:141] libmachine: (functional-441000) Calling .GetSSHKeyPath
I1025 18:55:09.859654   78742 main.go:141] libmachine: (functional-441000) Calling .GetSSHUsername
I1025 18:55:09.859758   78742 sshutil.go:53] new ssh client: &{IP:192.168.85.77 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/functional-441000/id_rsa Username:docker}
I1025 18:55:09.896085   78742 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 18:55:09.931634   78742 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.931648   78742 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.931806   78742 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.931806   78742 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.931815   78742 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 18:55:09.931823   78742 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.931829   78742 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.931970   78742 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.932001   78742 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.932014   78742 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.18s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-441000 image ls --format json --alsologtostderr:
[{"id":"ab867c720427c78321bbe5915a673179ebce03f1aebc967c1a0b495935fda7fa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-441000"],"size":"30"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d5
49eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-441000"],"size":"32900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[
],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47700000"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-441000 image ls --format json --alsologtostderr:
I1025 18:55:09.460896   78734 out.go:296] Setting OutFile to fd 1 ...
I1025 18:55:09.461141   78734 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.461146   78734 out.go:309] Setting ErrFile to fd 2...
I1025 18:55:09.461150   78734 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.461352   78734 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
I1025 18:55:09.462025   78734 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.462115   78734 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.462470   78734 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.462524   78734 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.470510   78734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51929
I1025 18:55:09.470912   78734 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.471343   78734 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.471357   78734 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.471574   78734 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.471681   78734 main.go:141] libmachine: (functional-441000) Calling .GetState
I1025 18:55:09.471777   78734 main.go:141] libmachine: (functional-441000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1025 18:55:09.471846   78734 main.go:141] libmachine: (functional-441000) DBG | hyperkit pid from json: 77875
I1025 18:55:09.473318   78734 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.473341   78734 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.481438   78734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
I1025 18:55:09.481815   78734 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.482168   78734 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.482178   78734 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.482453   78734 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.482558   78734 main.go:141] libmachine: (functional-441000) Calling .DriverName
I1025 18:55:09.482777   78734 ssh_runner.go:195] Run: systemctl --version
I1025 18:55:09.482801   78734 main.go:141] libmachine: (functional-441000) Calling .GetSSHHostname
I1025 18:55:09.482884   78734 main.go:141] libmachine: (functional-441000) Calling .GetSSHPort
I1025 18:55:09.482962   78734 main.go:141] libmachine: (functional-441000) Calling .GetSSHKeyPath
I1025 18:55:09.483037   78734 main.go:141] libmachine: (functional-441000) Calling .GetSSHUsername
I1025 18:55:09.483115   78734 sshutil.go:53] new ssh client: &{IP:192.168.85.77 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/functional-441000/id_rsa Username:docker}
I1025 18:55:09.516803   78734 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 18:55:09.576942   78734 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.576954   78734 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.577210   78734 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.577254   78734 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.577271   78734 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 18:55:09.577285   78734 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.577292   78734 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.577444   78734 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.577451   78734 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.577476   78734 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
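
As the stderr traces show, each `image ls` variant is fed by the same `docker images --no-trunc --format "{{json .}}"` call over SSH; short, table, json, and yaml differ only in rendering. A sketch of decoding the json form printed above (struct fields are read off that output; this is an illustration, not minikube's internal type):

	package sketch

	import (
		"encoding/json"
		"fmt"
	)

	// listedImage mirrors the fields visible in `image ls --format json`.
	type listedImage struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func printImages(raw []byte) error {
		var images []listedImage
		if err := json.Unmarshal(raw, &images); err != nil {
			return err
		}
		for _, img := range images {
			for _, tag := range img.RepoTags {
				fmt.Printf("%s\t%.12s\t%s bytes\n", tag, img.ID, img.Size)
			}
		}
		return nil
	}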

TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-441000 image ls --format yaml --alsologtostderr:
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47700000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ab867c720427c78321bbe5915a673179ebce03f1aebc967c1a0b495935fda7fa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-441000
size: "30"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-441000
size: "32900000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-441000 image ls --format yaml --alsologtostderr:
I1025 18:55:09.663527   78738 out.go:296] Setting OutFile to fd 1 ...
I1025 18:55:09.663841   78738 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.663847   78738 out.go:309] Setting ErrFile to fd 2...
I1025 18:55:09.663851   78738 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:09.664046   78738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
I1025 18:55:09.664696   78738 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.664791   78738 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:09.665199   78738 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.665240   78738 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.674327   78738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51940
I1025 18:55:09.674987   78738 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.675549   78738 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.675561   78738 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.675837   78738 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.675963   78738 main.go:141] libmachine: (functional-441000) Calling .GetState
I1025 18:55:09.676063   78738 main.go:141] libmachine: (functional-441000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1025 18:55:09.676137   78738 main.go:141] libmachine: (functional-441000) DBG | hyperkit pid from json: 77875
I1025 18:55:09.677724   78738 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:09.677754   78738 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:09.686566   78738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51943
I1025 18:55:09.687135   78738 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:09.687534   78738 main.go:141] libmachine: Using API Version  1
I1025 18:55:09.687550   78738 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:09.687844   78738 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:09.688015   78738 main.go:141] libmachine: (functional-441000) Calling .DriverName
I1025 18:55:09.688253   78738 ssh_runner.go:195] Run: systemctl --version
I1025 18:55:09.688277   78738 main.go:141] libmachine: (functional-441000) Calling .GetSSHHostname
I1025 18:55:09.688406   78738 main.go:141] libmachine: (functional-441000) Calling .GetSSHPort
I1025 18:55:09.688511   78738 main.go:141] libmachine: (functional-441000) Calling .GetSSHKeyPath
I1025 18:55:09.688670   78738 main.go:141] libmachine: (functional-441000) Calling .GetSSHUsername
I1025 18:55:09.688776   78738 sshutil.go:53] new ssh client: &{IP:192.168.85.77 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/functional-441000/id_rsa Username:docker}
I1025 18:55:09.725293   78738 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 18:55:09.750427   78738 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.750437   78738 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.750602   78738 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.750612   78738 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 18:55:09.750617   78738 main.go:141] libmachine: Making call to close driver server
I1025 18:55:09.750617   78738 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.750622   78738 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:09.750796   78738 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:09.750803   78738 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:09.750807   78738 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh pgrep buildkitd: exit status 1 (143.85332ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image build -t localhost/my-image:functional-441000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image build -t localhost/my-image:functional-441000 testdata/build --alsologtostderr: (1.816216823s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-441000 image build -t localhost/my-image:functional-441000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c997d6e2676a
Removing intermediate container c997d6e2676a
---> e95762cd1b46
Step 3/3 : ADD content.txt /
---> 97886e2fc667
Successfully built 97886e2fc667
Successfully tagged localhost/my-image:functional-441000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-441000 image build -t localhost/my-image:functional-441000 testdata/build --alsologtostderr:
I1025 18:55:10.161509   78751 out.go:296] Setting OutFile to fd 1 ...
I1025 18:55:10.162080   78751 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:10.162087   78751 out.go:309] Setting ErrFile to fd 2...
I1025 18:55:10.162091   78751 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 18:55:10.162293   78751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
I1025 18:55:10.163002   78751 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:10.163675   78751 config.go:182] Loaded profile config "functional-441000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 18:55:10.164052   78751 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:10.164097   78751 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:10.172086   78751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51961
I1025 18:55:10.172508   78751 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:10.172979   78751 main.go:141] libmachine: Using API Version  1
I1025 18:55:10.172991   78751 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:10.173195   78751 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:10.173296   78751 main.go:141] libmachine: (functional-441000) Calling .GetState
I1025 18:55:10.173369   78751 main.go:141] libmachine: (functional-441000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1025 18:55:10.173435   78751 main.go:141] libmachine: (functional-441000) DBG | hyperkit pid from json: 77875
I1025 18:55:10.174863   78751 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 18:55:10.174886   78751 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 18:55:10.182595   78751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51963
I1025 18:55:10.182968   78751 main.go:141] libmachine: () Calling .GetVersion
I1025 18:55:10.183310   78751 main.go:141] libmachine: Using API Version  1
I1025 18:55:10.183328   78751 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 18:55:10.183542   78751 main.go:141] libmachine: () Calling .GetMachineName
I1025 18:55:10.183636   78751 main.go:141] libmachine: (functional-441000) Calling .DriverName
I1025 18:55:10.183787   78751 ssh_runner.go:195] Run: systemctl --version
I1025 18:55:10.183808   78751 main.go:141] libmachine: (functional-441000) Calling .GetSSHHostname
I1025 18:55:10.183883   78751 main.go:141] libmachine: (functional-441000) Calling .GetSSHPort
I1025 18:55:10.183950   78751 main.go:141] libmachine: (functional-441000) Calling .GetSSHKeyPath
I1025 18:55:10.184032   78751 main.go:141] libmachine: (functional-441000) Calling .GetSSHUsername
I1025 18:55:10.184115   78751 sshutil.go:53] new ssh client: &{IP:192.168.85.77 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/functional-441000/id_rsa Username:docker}
I1025 18:55:10.217198   78751 build_images.go:151] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3795220495.tar
I1025 18:55:10.217315   78751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 18:55:10.224922   78751 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3795220495.tar
I1025 18:55:10.228073   78751 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3795220495.tar: stat -c "%s %y" /var/lib/minikube/build/build.3795220495.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3795220495.tar': No such file or directory
I1025 18:55:10.228104   78751 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3795220495.tar --> /var/lib/minikube/build/build.3795220495.tar (3072 bytes)
I1025 18:55:10.245286   78751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3795220495
I1025 18:55:10.252381   78751 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3795220495 -xf /var/lib/minikube/build/build.3795220495.tar
I1025 18:55:10.259539   78751 docker.go:341] Building image: /var/lib/minikube/build/build.3795220495
I1025 18:55:10.259614   78751 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-441000 /var/lib/minikube/build/build.3795220495
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1025 18:55:11.871792   78751 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-441000 /var/lib/minikube/build/build.3795220495: (1.612113444s)
I1025 18:55:11.871857   78751 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3795220495
I1025 18:55:11.879061   78751 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3795220495.tar
I1025 18:55:11.885839   78751 build_images.go:207] Built localhost/my-image:functional-441000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3795220495.tar
I1025 18:55:11.885860   78751 build_images.go:123] succeeded building to: functional-441000
I1025 18:55:11.885864   78751 build_images.go:124] failed building to: 
I1025 18:55:11.885882   78751 main.go:141] libmachine: Making call to close driver server
I1025 18:55:11.885889   78751 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:11.886052   78751 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:11.886064   78751 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 18:55:11.886071   78751 main.go:141] libmachine: Making call to close driver server
I1025 18:55:11.886085   78751 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
I1025 18:55:11.886088   78751 main.go:141] libmachine: (functional-441000) Calling .Close
I1025 18:55:11.886225   78751 main.go:141] libmachine: Successfully made call to close driver server
I1025 18:55:11.886235   78751 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 18:55:11.886233   78751 main.go:141] libmachine: (functional-441000) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls
E1025 18:55:13.679885   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
2023/10/25 18:55:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.12s)
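
For anyone reproducing this build outside the test harness: the three logged steps imply a build context of roughly the shape below. This is a sketch reconstructed from the step output above; the actual files under testdata/build (including the real contents of content.txt) are not part of this report.

# Recreate a build context matching the logged steps (illustrative only).
mkdir -p testdata/build
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' \
  > testdata/build/Dockerfile
echo 'placeholder' > testdata/build/content.txt   # real contents unknown
# Build inside the minikube VM the same way the test does:
out/minikube-darwin-amd64 -p functional-441000 image build \
  -t localhost/my-image:functional-441000 testdata/build --alsologtostderr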

TestFunctional/parallel/ImageCommands/Setup (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.756561311s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-441000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.82s)

TestFunctional/parallel/DockerEnv/bash (0.8s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-441000 docker-env) && out/minikube-darwin-amd64 status -p functional-441000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-441000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.80s)
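
For context, `minikube docker-env` prints shell exports that point the host's docker CLI at the Docker daemon inside the VM, which is what the eval in the commands above consumes. A minimal sketch of the mechanism; the variable values shown are illustrative, not captured output (only the VM IP 192.168.85.77 comes from this run):

# Typical shape of the docker-env output (illustrative values):
#   export DOCKER_TLS_VERIFY="1"
#   export DOCKER_HOST="tcp://192.168.85.77:2376"
#   export DOCKER_CERT_PATH="/Users/jenkins/.minikube/certs"
#   export MINIKUBE_ACTIVE_DOCKERD="functional-441000"
# Evaluating it makes subsequent docker commands run against the VM's daemon:
eval "$(out/minikube-darwin-amd64 -p functional-441000 docker-env)"
docker images   # lists images inside functional-441000, not on the host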

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.24s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image load --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image load --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr: (3.270537944s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image load --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image load --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr: (2.070510173s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.25s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.185420079s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-441000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image load --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image load --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr: (3.125695116s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.52s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image save gcr.io/google-containers/addon-resizer:functional-441000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image save gcr.io/google-containers/addon-resizer:functional-441000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.282525742s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image rm gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.247965523s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.41s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-441000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 image save --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-441000 image save --daemon gcr.io/google-containers/addon-resizer:functional-441000 --alsologtostderr: (1.211791272s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-441000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-441000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-441000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8d684" [93cd183a-7ce9-45fe-b728-391cf82c77dc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8d684" [93cd183a-7ce9-45fe-b728-391cf82c77dc] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.011832698s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-441000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-441000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-441000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 78433: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-441000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-441000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-441000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [45cce184-05f2-4a2f-b69f-a647be4bfa1d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [45cce184-05f2-4a2f-b69f-a647be4bfa1d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.014660751s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/ServiceCmd/List (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 service list -o json
functional_test.go:1493: Took "371.486029ms" to run "out/minikube-darwin-amd64 -p functional-441000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.85.77:30473
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.85.77:30473
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)
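
The endpoint found above (http://192.168.85.77:30473) is simply the VM IP plus the hello-node service's NodePort. A sketch of deriving the same URL by hand, assuming the hello-node service created in the DeployApp step:

IP=$(out/minikube-darwin-amd64 -p functional-441000 ip)
PORT=$(kubectl --context functional-441000 get svc hello-node \
  -o jsonpath='{.spec.ports[0].nodePort}')
echo "http://$IP:$PORT"   # for this run: http://192.168.85.77:30473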

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-441000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.133.41 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-441000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
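
The tunnel tests above work because `minikube tunnel` installs host routes to the cluster's service network, which is what makes ClusterIPs such as the kube-dns address 10.96.0.10 and the nginx-svc LoadBalancer IP reachable directly from macOS. A hedged sketch of checking that by hand while a tunnel is running, assuming minikube's default service CIDR of 10.96.0.0/12:

netstat -rn | grep '10.96'   # expect a route for the service CIDR via the VM
dig +short @10.96.0.10 nginx-svc.default.svc.cluster.local.
curl -sI http://nginx-svc.default.svc.cluster.local./ | head -n 1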

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "196.311497ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "80.41412ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "198.071402ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "82.863076ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/MountCmd/any-port (6.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3427769686/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698285297778753000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3427769686/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698285297778753000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3427769686/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698285297778753000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3427769686/001/test-1698285297778753000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (161.600105ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 01:54 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 01:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 01:54 test-1698285297778753000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh cat /mount-9p/test-1698285297778753000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-441000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [04885655-a29e-4d48-b476-3bc4db939087] Pending
helpers_test.go:344: "busybox-mount" [04885655-a29e-4d48-b476-3bc4db939087] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [04885655-a29e-4d48-b476-3bc4db939087] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [04885655-a29e-4d48-b476-3bc4db939087] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.009254926s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-441000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3427769686/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.04s)

TestFunctional/parallel/MountCmd/specific-port (1.48s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port697465411/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (125.214544ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port697465411/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh "sudo umount -f /mount-9p": exit status 1 (126.65371ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-441000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port697465411/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.48s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2759765385/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2759765385/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2759765385/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T" /mount1: exit status 1 (159.895541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-441000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-441000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2759765385/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2759765385/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-441000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2759765385/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)

TestFunctional/delete_addon-resizer_images (0.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-441000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-441000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-441000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (35.83s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-517000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-517000 --driver=hyperkit : (35.826017596s)
--- PASS: TestImageBuild/serial/Setup (35.83s)

TestImageBuild/serial/NormalBuild (1.25s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-517000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-517000: (1.253257988s)
--- PASS: TestImageBuild/serial/NormalBuild (1.25s)

TestImageBuild/serial/BuildWithBuildArg (0.72s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-517000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.72s)

TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-517000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-517000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

TestIngressAddonLegacy/StartLegacyK8sCluster (72.5s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-918000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E1025 18:56:35.602775   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-918000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m12.503452204s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (72.50s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.3s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons enable ingress --alsologtostderr -v=5: (18.301729094s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.30s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (41.2s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-918000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-918000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.000573558s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-918000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-918000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [52c470ea-882d-47a2-a06d-268d33f073e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [52c470ea-882d-47a2-a06d-268d33f073e6] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.013445808s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-918000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.85.79
addons_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons disable ingress-dns --alsologtostderr -v=1: (7.932961978s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-918000 addons disable ingress --alsologtostderr -v=1: (7.301517968s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (41.20s)
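The curl step above exercises host-based routing: the request goes to the node's own address while the HTTP Host header names the Ingress rule's host. A minimal sketch of the same check in Go, assuming the URL and host value shown in the test (illustrative only, not the harness's code):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Build the request against the node address, then override the
	// Host header so the ingress controller routes it to the nginx rule.
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // same effect as curl -H 'Host: nginx.example.com'
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}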

TestJSONOutput/start/Command (49.32s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-953000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E1025 18:58:51.649250   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:59:10.021696   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.027668   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.038637   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.060537   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.101872   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.182649   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.344573   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:10.665116   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:11.306272   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:12.586894   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:15.147413   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 18:59:19.356036   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 18:59:20.268703   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-953000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (49.319679441s)
--- PASS: TestJSONOutput/start/Command (49.32s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-953000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-953000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.17s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-953000 --output=json --user=testUser
E1025 18:59:30.508705   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-953000 --output=json --user=testUser: (8.172750627s)
--- PASS: TestJSONOutput/stop/Command (8.17s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.84s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-822000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-822000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (441.496138ms)

-- stdout --
	{"specversion":"1.0","id":"f7202e81-644f-4eb8-a16c-1adc071a2dc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-822000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eea92689-7243-4ab4-b620-a753666a3a7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17491"}}
	{"specversion":"1.0","id":"4b5c4c52-96a0-4a94-8ca2-5330b3610e82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig"}}
	{"specversion":"1.0","id":"c83301f2-edb2-4a97-b9fa-c2449a72c9b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"164560a6-1c0b-4646-a6e7-8f24e668e618","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7a4aa24b-e897-42bc-9b74-c09c2bd78624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube"}}
	{"specversion":"1.0","id":"0a59fc32-98d7-4d22-8f4c-af1f7fc90ebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1acf0c4a-f818-4b77-aa57-d25a19e0c51f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-822000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-822000
--- PASS: TestErrorJSONOutput (0.84s)
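Each line in the stdout block above is a CloudEvents-style JSON object, which is what the TestJSONOutput and TestErrorJSONOutput assertions parse. A minimal consumer sketch, assuming only the field names visible in that output (the event struct is illustrative, not minikube's own type):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the stdout block above; the type
// name and field selection are assumptions for illustration.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... | this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
		}
	}
}

The final error event in the block above carries exitcode 56 and the DRV_UNSUPPORTED_OS name in its data map.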

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (84.52s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-678000 --driver=hyperkit 
E1025 18:59:50.988521   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-678000 --driver=hyperkit : (38.56094511s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-680000 --driver=hyperkit 
E1025 19:00:31.948297   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-680000 --driver=hyperkit : (36.374485044s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-678000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-680000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-680000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-680000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-680000: (3.43678417s)
helpers_test.go:175: Cleaning up "first-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-678000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-678000: (5.263756562s)
--- PASS: TestMinikubeProfile (84.52s)

TestMountStart/serial/StartWithMountFirst (16.31s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-820000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-820000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.305760701s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.31s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-820000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-820000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

TestMountStart/serial/StartWithMountSecond (16.48s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-838000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-838000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.477207678s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.48s)

TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-838000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-838000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

TestMountStart/serial/DeleteFirst (2.38s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-820000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-820000 --alsologtostderr -v=5: (2.376136033s)
--- PASS: TestMountStart/serial/DeleteFirst (2.38s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-838000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-838000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (2.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-838000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-838000: (2.249832246s)
--- PASS: TestMountStart/serial/Stop (2.25s)

TestMountStart/serial/RestartStopped (16.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-838000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-838000: (15.407029074s)
--- PASS: TestMountStart/serial/RestartStopped (16.41s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-838000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-838000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (95.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-338000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1025 19:02:46.564388   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:46.570698   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:46.581572   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:46.601668   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:46.641863   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:46.722872   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:46.883183   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:47.204901   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:47.845060   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:49.127180   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:51.687532   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:02:56.809296   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:03:07.049305   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:03:27.529237   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-338000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m35.699922535s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.96s)

TestMultiNode/serial/DeployApp2Nodes (4.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-338000 -- rollout status deployment/busybox: (2.765720489s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-k5rqc -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-q788v -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-k5rqc -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-q788v -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-k5rqc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-q788v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.48s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-k5rqc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-k5rqc -- sh -c "ping -c 1 192.168.85.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-q788v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-338000 -- exec busybox-5bc68d56bd-q788v -- sh -c "ping -c 1 192.168.85.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (32.6s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-338000 -v 3 --alsologtostderr
E1025 19:03:51.640503   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:04:08.490120   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-338000 -v 3 --alsologtostderr: (32.264827838s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.60s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (5.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp testdata/cp-test.txt multinode-338000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000 "sudo cat /home/docker/cp-test.txt"
E1025 19:04:10.014398   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2246822355/001/cp-test_multinode-338000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000:/home/docker/cp-test.txt multinode-338000-m02:/home/docker/cp-test_multinode-338000_multinode-338000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m02 "sudo cat /home/docker/cp-test_multinode-338000_multinode-338000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000:/home/docker/cp-test.txt multinode-338000-m03:/home/docker/cp-test_multinode-338000_multinode-338000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m03 "sudo cat /home/docker/cp-test_multinode-338000_multinode-338000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp testdata/cp-test.txt multinode-338000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2246822355/001/cp-test_multinode-338000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000-m02:/home/docker/cp-test.txt multinode-338000:/home/docker/cp-test_multinode-338000-m02_multinode-338000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000 "sudo cat /home/docker/cp-test_multinode-338000-m02_multinode-338000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000-m02:/home/docker/cp-test.txt multinode-338000-m03:/home/docker/cp-test_multinode-338000-m02_multinode-338000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m03 "sudo cat /home/docker/cp-test_multinode-338000-m02_multinode-338000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp testdata/cp-test.txt multinode-338000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2246822355/001/cp-test_multinode-338000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000-m03:/home/docker/cp-test.txt multinode-338000:/home/docker/cp-test_multinode-338000-m03_multinode-338000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000 "sudo cat /home/docker/cp-test_multinode-338000-m03_multinode-338000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 cp multinode-338000-m03:/home/docker/cp-test.txt multinode-338000-m02:/home/docker/cp-test_multinode-338000-m03_multinode-338000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 ssh -n multinode-338000-m02 "sudo cat /home/docker/cp-test_multinode-338000-m03_multinode-338000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.69s)

TestMultiNode/serial/StopNode (2.74s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-338000 node stop m03: (2.193346999s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-338000 status: exit status 7 (271.659397ms)

-- stdout --
	multinode-338000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-338000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-338000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr: exit status 7 (279.151628ms)

-- stdout --
	multinode-338000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-338000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-338000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 19:04:17.554807   79735 out.go:296] Setting OutFile to fd 1 ...
	I1025 19:04:17.555100   79735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:04:17.555106   79735 out.go:309] Setting ErrFile to fd 2...
	I1025 19:04:17.555110   79735 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:04:17.555295   79735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	I1025 19:04:17.555482   79735 out.go:303] Setting JSON to false
	I1025 19:04:17.555512   79735 mustload.go:65] Loading cluster: multinode-338000
	I1025 19:04:17.555571   79735 notify.go:220] Checking for updates...
	I1025 19:04:17.555867   79735 config.go:182] Loaded profile config "multinode-338000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 19:04:17.555881   79735 status.go:255] checking status of multinode-338000 ...
	I1025 19:04:17.556285   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.556369   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.565467   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52921
	I1025 19:04:17.565803   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.566214   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.566226   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.566461   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.566576   79735 main.go:141] libmachine: (multinode-338000) Calling .GetState
	I1025 19:04:17.566662   79735 main.go:141] libmachine: (multinode-338000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:04:17.566733   79735 main.go:141] libmachine: (multinode-338000) DBG | hyperkit pid from json: 79412
	I1025 19:04:17.568060   79735 status.go:330] multinode-338000 host status = "Running" (err=<nil>)
	I1025 19:04:17.568081   79735 host.go:66] Checking if "multinode-338000" exists ...
	I1025 19:04:17.568313   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.568335   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.576223   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52923
	I1025 19:04:17.576571   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.576906   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.576917   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.577134   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.583338   79735 main.go:141] libmachine: (multinode-338000) Calling .GetIP
	I1025 19:04:17.583456   79735 host.go:66] Checking if "multinode-338000" exists ...
	I1025 19:04:17.583687   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.583717   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.591634   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52925
	I1025 19:04:17.591976   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.592459   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.592490   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.592846   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.592983   79735 main.go:141] libmachine: (multinode-338000) Calling .DriverName
	I1025 19:04:17.593142   79735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 19:04:17.593165   79735 main.go:141] libmachine: (multinode-338000) Calling .GetSSHHostname
	I1025 19:04:17.593240   79735 main.go:141] libmachine: (multinode-338000) Calling .GetSSHPort
	I1025 19:04:17.593343   79735 main.go:141] libmachine: (multinode-338000) Calling .GetSSHKeyPath
	I1025 19:04:17.593427   79735 main.go:141] libmachine: (multinode-338000) Calling .GetSSHUsername
	I1025 19:04:17.593510   79735 sshutil.go:53] new ssh client: &{IP:192.168.85.85 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/multinode-338000/id_rsa Username:docker}
	I1025 19:04:17.639272   79735 ssh_runner.go:195] Run: systemctl --version
	I1025 19:04:17.643032   79735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 19:04:17.652691   79735 kubeconfig.go:92] found "multinode-338000" server: "https://192.168.85.85:8443"
	I1025 19:04:17.652711   79735 api_server.go:166] Checking apiserver status ...
	I1025 19:04:17.652748   79735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 19:04:17.661741   79735 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1931/cgroup
	I1025 19:04:17.668046   79735 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/pod4e21e180d748e8f81bc508974a4c8abc/2e7039321fdff216959c9007e03f499838ab37fd1b2c3ead2e2c26666dc2b75f"
	I1025 19:04:17.668099   79735 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4e21e180d748e8f81bc508974a4c8abc/2e7039321fdff216959c9007e03f499838ab37fd1b2c3ead2e2c26666dc2b75f/freezer.state
	I1025 19:04:17.674101   79735 api_server.go:204] freezer state: "THAWED"
	I1025 19:04:17.674113   79735 api_server.go:253] Checking apiserver healthz at https://192.168.85.85:8443/healthz ...
	I1025 19:04:17.677448   79735 api_server.go:279] https://192.168.85.85:8443/healthz returned 200:
	ok
	I1025 19:04:17.677460   79735 status.go:421] multinode-338000 apiserver status = Running (err=<nil>)
	I1025 19:04:17.677468   79735 status.go:257] multinode-338000 status: &{Name:multinode-338000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 19:04:17.677478   79735 status.go:255] checking status of multinode-338000-m02 ...
	I1025 19:04:17.677703   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.677722   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.685652   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52929
	I1025 19:04:17.686020   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.686412   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.686462   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.686736   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.686842   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .GetState
	I1025 19:04:17.686933   79735 main.go:141] libmachine: (multinode-338000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:04:17.687011   79735 main.go:141] libmachine: (multinode-338000-m02) DBG | hyperkit pid from json: 79446
	I1025 19:04:17.688388   79735 status.go:330] multinode-338000-m02 host status = "Running" (err=<nil>)
	I1025 19:04:17.688397   79735 host.go:66] Checking if "multinode-338000-m02" exists ...
	I1025 19:04:17.688630   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.688653   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.696686   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52931
	I1025 19:04:17.697030   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.697419   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.697432   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.697661   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.697781   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .GetIP
	I1025 19:04:17.697859   79735 host.go:66] Checking if "multinode-338000-m02" exists ...
	I1025 19:04:17.698107   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.698135   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.706238   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52933
	I1025 19:04:17.706588   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.706971   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.706986   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.707212   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.707350   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .DriverName
	I1025 19:04:17.707474   79735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 19:04:17.707485   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .GetSSHHostname
	I1025 19:04:17.707575   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .GetSSHPort
	I1025 19:04:17.707657   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .GetSSHKeyPath
	I1025 19:04:17.707739   79735 main.go:141] libmachine: (multinode-338000-m02) Calling .GetSSHUsername
	I1025 19:04:17.707833   79735 sshutil.go:53] new ssh client: &{IP:192.168.85.86 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/multinode-338000-m02/id_rsa Username:docker}
	I1025 19:04:17.755034   79735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 19:04:17.763970   79735 status.go:257] multinode-338000-m02 status: &{Name:multinode-338000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 19:04:17.763998   79735 status.go:255] checking status of multinode-338000-m03 ...
	I1025 19:04:17.764252   79735 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:04:17.764277   79735 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:04:17.772362   79735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52936
	I1025 19:04:17.772742   79735 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:04:17.773074   79735 main.go:141] libmachine: Using API Version  1
	I1025 19:04:17.773088   79735 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:04:17.773292   79735 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:04:17.773387   79735 main.go:141] libmachine: (multinode-338000-m03) Calling .GetState
	I1025 19:04:17.773469   79735 main.go:141] libmachine: (multinode-338000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:04:17.773536   79735 main.go:141] libmachine: (multinode-338000-m03) DBG | hyperkit pid from json: 79523
	I1025 19:04:17.774856   79735 main.go:141] libmachine: (multinode-338000-m03) DBG | hyperkit pid 79523 missing from process table
	I1025 19:04:17.774910   79735 status.go:330] multinode-338000-m03 host status = "Stopped" (err=<nil>)
	I1025 19:04:17.774922   79735 status.go:343] host is not running, skipping remaining checks
	I1025 19:04:17.774927   79735 status.go:257] multinode-338000-m03 status: &{Name:multinode-338000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.74s)
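As the two status runs above show, minikube status exits non-zero (7 here) once any node in the profile is stopped, so callers cannot rely on stdout alone. A hedged sketch of reading that convention from Go (binary path and profile name copied from this run; the exit-code handling is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths and profile name are taken from this report; adjust for your setup.
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-338000", "status")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the run above this code is 7 when a node's host is Stopped.
		fmt.Printf("status exited %d: at least one node is not running\n", exitErr.ExitCode())
	}
}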

TestMultiNode/serial/StartAfterStop (27.23s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 node start m03 --alsologtostderr
E1025 19:04:37.703612   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-338000 node start m03 --alsologtostderr: (26.849212573s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.23s)

TestMultiNode/serial/RestartKeepsNodes (174.87s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-338000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-338000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-338000: (18.478793547s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-338000 --wait=true -v=8 --alsologtostderr
E1025 19:05:30.410030   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-338000 --wait=true -v=8 --alsologtostderr: (2m36.27004514s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-338000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.87s)

TestMultiNode/serial/DeleteNode (2.99s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-338000 node delete m03: (2.664036635s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.99s)

TestMultiNode/serial/StopMultiNode (16.49s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 stop
E1025 19:07:46.557267   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-338000 stop: (16.332740007s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-338000 status: exit status 7 (78.795552ms)

-- stdout --
	multinode-338000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-338000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr: exit status 7 (80.468434ms)

-- stdout --
	multinode-338000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-338000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 19:07:59.336758   79929 out.go:296] Setting OutFile to fd 1 ...
	I1025 19:07:59.337000   79929 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:07:59.337005   79929 out.go:309] Setting ErrFile to fd 2...
	I1025 19:07:59.337009   79929 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 19:07:59.337185   79929 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
	I1025 19:07:59.337362   79929 out.go:303] Setting JSON to false
	I1025 19:07:59.337381   79929 mustload.go:65] Loading cluster: multinode-338000
	I1025 19:07:59.337428   79929 notify.go:220] Checking for updates...
	I1025 19:07:59.337705   79929 config.go:182] Loaded profile config "multinode-338000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 19:07:59.337718   79929 status.go:255] checking status of multinode-338000 ...
	I1025 19:07:59.338072   79929 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:07:59.338140   79929 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:07:59.346234   79929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53118
	I1025 19:07:59.346561   79929 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:07:59.346994   79929 main.go:141] libmachine: Using API Version  1
	I1025 19:07:59.347005   79929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:07:59.347249   79929 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:07:59.347360   79929 main.go:141] libmachine: (multinode-338000) Calling .GetState
	I1025 19:07:59.347453   79929 main.go:141] libmachine: (multinode-338000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:07:59.347499   79929 main.go:141] libmachine: (multinode-338000) DBG | hyperkit pid from json: 79802
	I1025 19:07:59.348536   79929 main.go:141] libmachine: (multinode-338000) DBG | hyperkit pid 79802 missing from process table
	I1025 19:07:59.348575   79929 status.go:330] multinode-338000 host status = "Stopped" (err=<nil>)
	I1025 19:07:59.348581   79929 status.go:343] host is not running, skipping remaining checks
	I1025 19:07:59.348586   79929 status.go:257] multinode-338000 status: &{Name:multinode-338000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 19:07:59.348610   79929 status.go:255] checking status of multinode-338000-m02 ...
	I1025 19:07:59.348836   79929 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1025 19:07:59.348858   79929 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1025 19:07:59.356585   79929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53120
	I1025 19:07:59.356898   79929 main.go:141] libmachine: () Calling .GetVersion
	I1025 19:07:59.357303   79929 main.go:141] libmachine: Using API Version  1
	I1025 19:07:59.357325   79929 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 19:07:59.357530   79929 main.go:141] libmachine: () Calling .GetMachineName
	I1025 19:07:59.357622   79929 main.go:141] libmachine: (multinode-338000-m02) Calling .GetState
	I1025 19:07:59.357709   79929 main.go:141] libmachine: (multinode-338000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1025 19:07:59.357762   79929 main.go:141] libmachine: (multinode-338000-m02) DBG | hyperkit pid from json: 79839
	I1025 19:07:59.358769   79929 main.go:141] libmachine: (multinode-338000-m02) DBG | hyperkit pid 79839 missing from process table
	I1025 19:07:59.358808   79929 status.go:330] multinode-338000-m02 host status = "Stopped" (err=<nil>)
	I1025 19:07:59.358817   79929 status.go:343] host is not running, skipping remaining checks
	I1025 19:07:59.358823   79929 status.go:257] multinode-338000-m02 status: &{Name:multinode-338000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.49s)

TestMultiNode/serial/RestartMultiNode (111.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-338000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1025 19:08:14.248380   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:08:51.634183   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:09:10.006077   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-338000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m51.637294336s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-338000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (111.98s)

TestMultiNode/serial/ValidateNameConflict (45.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-338000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-338000-m02 --driver=hyperkit 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-338000-m02 --driver=hyperkit : exit status 14 (539.533426ms)

-- stdout --
	* [multinode-338000-m02] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17491
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-338000-m02' is duplicated with machine name 'multinode-338000-m02' in profile 'multinode-338000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-338000-m03 --driver=hyperkit 
E1025 19:10:14.700143   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-338000-m03 --driver=hyperkit : (34.668493173s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-338000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-338000: exit status 80 (288.839511ms)

-- stdout --
	* Adding node m03 to cluster multinode-338000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-338000-m03 already exists in multinode-338000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-338000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-338000-m03: (10.272228611s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.83s)

TestPreload (174.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-255000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-255000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m48.411730746s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-255000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-255000 image pull gcr.io/k8s-minikube/busybox: (1.304614206s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-255000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-255000: (8.244898906s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-255000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E1025 19:12:46.548793   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-255000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (50.896495805s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-255000 image list
helpers_test.go:175: Cleaning up "test-preload-255000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-255000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-255000: (5.268760181s)
--- PASS: TestPreload (174.29s)

TestScheduledStopUnix (105.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-526000 --memory=2048 --driver=hyperkit 
E1025 19:13:51.617531   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:14:09.986271   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-526000 --memory=2048 --driver=hyperkit : (33.605712371s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-526000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-526000 -n scheduled-stop-526000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-526000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-526000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-526000 -n scheduled-stop-526000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-526000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-526000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-526000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-526000: exit status 7 (75.518098ms)

-- stdout --
	scheduled-stop-526000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-526000 -n scheduled-stop-526000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-526000 -n scheduled-stop-526000: exit status 7 (67.507628ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-526000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-526000
--- PASS: TestScheduledStopUnix (105.09s)
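
For manual reproduction, the scheduled-stop flags exercised by this test can be driven directly; a minimal sketch using the profile name from this run:

$ out/minikube-darwin-amd64 stop -p scheduled-stop-526000 --schedule 5m                 # arm a stop five minutes out
$ out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-526000    # inspect the pending timer
$ out/minikube-darwin-amd64 stop -p scheduled-stop-526000 --cancel-scheduled            # disarm the scheduled stop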

TestSkaffold (109.67s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2084370911 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-579000 --memory=2600 --driver=hyperkit 
E1025 19:15:33.035245   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-579000 --memory=2600 --driver=hyperkit : (34.926088311s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2084370911 run --minikube-profile skaffold-579000 --kube-context skaffold-579000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2084370911 run --minikube-profile skaffold-579000 --kube-context skaffold-579000 --status-check=true --port-forward=false --interactive=false: (57.232825423s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-866998d9dd-7z9b9" [ec03d70f-9e77-48e4-869a-1a069f415dc3] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011322191s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-87d74ff58-dv922" [92f59a32-fefd-4afd-95a6-4e106112f605] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00720914s
helpers_test.go:175: Cleaning up "skaffold-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-579000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-579000: (5.286623148s)
--- PASS: TestSkaffold (109.67s)

TestKubernetesUpgrade (149.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
E1025 19:22:01.272240   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.278712   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.289003   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.377314   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.417917   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.499834   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.660100   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:01.980208   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:02.621592   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:03.902925   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:06.463875   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:11.584566   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:22:21.825788   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m12.171129402s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-098000
E1025 19:22:42.306504   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-098000: (8.300023358s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-098000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-098000 status --format={{.Host}}: exit status 7 (67.477838ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=hyperkit : (33.15921517s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-098000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (542.496087ms)

-- stdout --
	* [kubernetes-upgrade-098000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17491
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-098000
	    minikube start -p kubernetes-upgrade-098000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0980002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-098000 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=hyperkit 
E1025 19:23:23.266363   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:23:51.596826   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=hyperkit : (31.868979311s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-098000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-098000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-098000: (3.487883736s)
--- PASS: TestKubernetesUpgrade (149.70s)
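
The upgrade path this test validates (and the downgrade it expects to be refused) maps onto the following manual sequence; a sketch reusing the binary, profile, and flags from this run:

$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit
$ out/minikube-darwin-amd64 stop -p kubernetes-upgrade-098000
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.28.3 --driver=hyperkit   # in-place upgrade
$ out/minikube-darwin-amd64 start -p kubernetes-upgrade-098000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit   # refused: K8S_DOWNGRADE_UNSUPPORTED, exit status 106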

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.25s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17491
- KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1593134191/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1593134191/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1593134191/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1593134191/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.25s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.17s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17491
- KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3285339356/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3285339356/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3285339356/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3285339356/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.17s)

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestStoppedBinaryUpgrade/Upgrade (165.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3943793149.exe start -p stopped-upgrade-131000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:196: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3943793149.exe start -p stopped-upgrade-131000 --memory=2200 --vm-driver=hyperkit : (1m24.685581563s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3943793149.exe -p stopped-upgrade-131000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.3943793149.exe -p stopped-upgrade-131000 stop: (8.08813504s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-131000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1025 19:24:45.185108   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-131000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m12.23675263s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (165.01s)

TestPause/serial/Start (49.87s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-439000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E1025 19:24:09.969655   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-439000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (49.866486197s)
--- PASS: TestPause/serial/Start (49.87s)

TestPause/serial/SecondStartNoReconfiguration (39.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-439000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-439000 --alsologtostderr -v=1 --driver=hyperkit : (39.166103872s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.18s)

TestPause/serial/Pause (0.56s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-439000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

TestPause/serial/VerifyStatus (0.16s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-439000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-439000 --output=json --layout=cluster: exit status 2 (161.731936ms)

-- stdout --
	{"Name":"pause-439000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-439000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)
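
The --layout=cluster status above encodes component state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), which is why a paused cluster yields a non-zero exit here. A sketch for pulling just the top-level state, assuming jq is available:

$ out/minikube-darwin-amd64 status -p pause-439000 --output=json --layout=cluster | jq -r '.StatusName'   # prints "Paused" for this run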

TestPause/serial/Unpause (0.5s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-439000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.50s)

TestPause/serial/PauseAgain (0.58s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-439000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.58s)

TestPause/serial/DeletePaused (5.28s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-439000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-439000 --alsologtostderr -v=5: (5.27786107s)
--- PASS: TestPause/serial/DeletePaused (5.28s)

TestPause/serial/VerifyDeletedResources (0.83s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.83s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.74s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (735.880027ms)

-- stdout --
	* [NoKubernetes-940000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17491
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.74s)
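
As the MK_USAGE error above shows, --no-kubernetes cannot be combined with --kubernetes-version; if a version is pinned in the global config, clearing it first lets a no-Kubernetes start proceed. A sketch with the profile from this run:

$ out/minikube-darwin-amd64 config unset kubernetes-version
$ out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --driver=hyperkit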

TestNoKubernetes/serial/StartWithK8s (39.07s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-940000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-940000 --driver=hyperkit : (38.899385287s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-940000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.07s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.51s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-131000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-131000: (2.510563851s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.51s)

TestNetworkPlugins/group/auto/Start (57.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (57.703139067s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.70s)

TestNoKubernetes/serial/StartWithStopK8s (16.41s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --driver=hyperkit : (13.876914571s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-940000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-940000 status -o json: exit status 2 (146.577766ms)

-- stdout --
	{"Name":"NoKubernetes-940000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-940000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-940000: (2.39053534s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.41s)

TestNoKubernetes/serial/Start (15.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-940000 --no-kubernetes --driver=hyperkit : (15.691588971s)
--- PASS: TestNoKubernetes/serial/Start (15.69s)

TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

TestNetworkPlugins/group/auto/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6xnwx" [bac5d715-d365-4e4f-aee5-5c7c8b9391f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6xnwx" [bac5d715-d365-4e4f-aee5-5c7c8b9391f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.018731746s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.23s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-940000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-940000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (128.530956ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.51s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.51s)

TestNoKubernetes/serial/Stop (2.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-940000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-940000: (2.263077604s)
--- PASS: TestNoKubernetes/serial/Stop (2.26s)

TestNoKubernetes/serial/StartNoArgs (15.07s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-940000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-940000 --driver=hyperkit : (15.067153846s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.07s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
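
The last two probes differ only in target: Localhost dials 127.0.0.1 inside the netcat pod, while HairPin dials the pod's own service name, verifying hairpin traffic (a pod reaching itself through its Service). Side by side, with the names from this run:

$ kubectl --context auto-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # in-pod loopback
$ kubectl --context auto-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # via the netcat service (hairpin)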

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-940000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-940000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (131.423722ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/kindnet/Start (58.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (58.876984966s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.88s)

TestNetworkPlugins/group/calico/Start (77.57s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E1025 19:27:29.022036   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:27:46.512617   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m17.565805377s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.57s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gph8n" [559155e2-b069-4d2b-a213-a02f9bb0daf4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012624575s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x8vsb" [9e73351f-7785-4d7e-9771-6fc7e7dcbc8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x8vsb" [9e73351f-7785-4d7e-9771-6fc7e7dcbc8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.008324358s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.21s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gjxmj" [f60a1bae-00b9-485a-8af7-8996389faeee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.013980801s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.17s)

TestNetworkPlugins/group/calico/NetCatPod (15.24s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7xqr8" [5338a97b-d782-43e0-8acb-b94cc8808c39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7xqr8" [5338a97b-d782-43e0-8acb-b94cc8808c39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.007889474s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.24s)

TestNetworkPlugins/group/custom-flannel/Start (59.07s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (59.068919853s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.07s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (50.47s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E1025 19:29:10.120152   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (50.473046406s)
--- PASS: TestNetworkPlugins/group/false/Start (50.47s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.19s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fz42x" [69e3cd46-f279-4dc4-8f57-e1b5116b989e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fz42x" [69e3cd46-f279-4dc4-8f57-e1b5116b989e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006183204s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/false/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

TestNetworkPlugins/group/false/NetCatPod (17.21s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dxdqb" [32cce2f9-354e-4d3f-8874-563f2b83b294] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dxdqb" [32cce2f9-354e-4d3f-8874-563f2b83b294] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 17.007218603s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (17.21s)

TestNetworkPlugins/group/enable-default-cni/Start (49.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (49.265528559s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.27s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (58.75s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (58.748358048s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.75s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x4nmv" [b98b87d6-4bf6-4bf1-8ae5-717398586b15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x4nmv" [b98b87d6-4bf6-4bf1-8ae5-717398586b15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.0089657s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (51.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (51.99172705s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.99s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-85bd2" [19aa697b-4afa-478b-8eef-f7fc0392b5c0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.012765226s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2w4sh" [e801ba8e-4592-4bbf-ab08-a82314d3a7c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 19:31:42.279716   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.284887   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.295261   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.316668   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.358462   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.438626   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.598741   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:42.919020   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:43.560220   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2w4sh" [e801ba8e-4592-4bbf-ab08-a82314d3a7c3] Running
E1025 19:31:44.840460   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:31:47.401168   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.006946087s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.21s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/Start (87.99s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E1025 19:32:13.177387   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 19:32:23.245197   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-182000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m27.993910184s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (87.99s)
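Note: every profile in this TestNetworkPlugins group is created by the same start invocation, varying only the network flag. The variants exercised in this section (taken verbatim from the Run: lines above; the kindnet and calico profiles were started the same way earlier in the report) are:

	out/minikube-darwin-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit                      # likewise --cni=bridge and --cni=false
	out/minikube-darwin-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit   # custom CNI manifest
	out/minikube-darwin-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit
	out/minikube-darwin-amd64 start -p <profile> --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit

Here <profile> stands for the per-variant name, e.g. flannel-182000 or kubenet-182000.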

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-97kl7" [53339579-5e89-4a61-911e-77edc1787d2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-97kl7" [53339579-5e89-4a61-911e-77edc1787d2e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008494239s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1025 19:33:04.206553   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:33:06.009750   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.016103   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.026246   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.048340   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.088526   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.169458   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.330350   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:06.650705   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:07.291612   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:08.573211   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:11.133503   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:16.253786   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:26.494624   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:29.389961   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:29.395959   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:29.407847   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:29.429117   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:29.470845   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:29.551475   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:29.712991   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:30.033836   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:30.675639   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:31.956157   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:33:34.516755   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m9.823460566s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.82s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-182000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.17s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-182000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-snq4c" [a3c97f7a-86dc-4cd8-9ee1-8ef62d86d1a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 19:33:39.638454   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-snq4c" [a3c97f7a-86dc-4cd8-9ee1-8ef62d86d1a8] Running
E1025 19:33:47.003041   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:33:49.879057   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.008770652s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.22s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-182000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-182000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E1025 19:49:10.154473   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 19:49:29.113698   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (56.5s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:34:10.129301   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 19:34:10.359973   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:34:26.129107   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:34:27.965769   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:34:42.797159   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:42.802435   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:42.813402   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:42.834479   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:42.875983   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:42.956821   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:43.117800   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:43.439248   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:44.079532   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:45.360341   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:47.920536   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:51.321436   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:34:53.040799   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:34:59.228965   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.234377   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.245140   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.267053   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.307468   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.389453   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.550973   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:34:59.871208   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:35:00.511547   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:35:01.793123   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:35:03.281330   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.3: (56.496012131s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.50s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-159000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [77ffabf5-7621-418a-b256-79f51d940342] Pending
helpers_test.go:344: "busybox" [77ffabf5-7621-418a-b256-79f51d940342] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 19:35:04.353673   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [77ffabf5-7621-418a-b256-79f51d940342] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.018687098s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-159000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.30s)

TestStartStop/group/no-preload/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-080000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f37a891-bc99-446c-bd65-528ddc0f2a24] Pending
helpers_test.go:344: "busybox" [9f37a891-bc99-446c-bd65-528ddc0f2a24] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f37a891-bc99-446c-bd65-528ddc0f2a24] Running
E1025 19:35:09.474574   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.01611777s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-080000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-159000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-159000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (8.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-159000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-159000 --alsologtostderr -v=3: (8.294423949s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.29s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-080000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-080000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/no-preload/serial/Stop (8.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-080000 --alsologtostderr -v=3
E1025 19:35:19.715517   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-080000 --alsologtostderr -v=3: (8.264784025s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 7 (69.093661ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-159000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (466.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1025 19:35:23.763171   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m46.594801227s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (466.75s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-080000 -n no-preload-080000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-080000 -n no-preload-080000: exit status 7 (67.62544ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-080000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)
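Note: each TestStartStop serial group reduces to the same CLI flow; sketched below with the no-preload flags from the Run: lines (long flag lists abridged with "..."):

	out/minikube-darwin-amd64 start -p no-preload-080000 --memory=2200 ... --preload=false --driver=hyperkit --kubernetes-version=v1.28.3   # FirstStart
	out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-080000 ...                                                         # EnableAddonWhileActive
	out/minikube-darwin-amd64 stop -p no-preload-080000 --alsologtostderr -v=3                                                              # Stop
	out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-080000 -n no-preload-080000                                           # "Stopped" / exit status 7 is the expected state here
	out/minikube-darwin-amd64 addons enable dashboard -p no-preload-080000 ...                                                              # EnableAddonAfterStop
	out/minikube-darwin-amd64 start -p no-preload-080000 ... --kubernetes-version=v1.28.3                                                   # SecondStart resumes the stopped cluster

All commands appear verbatim in the corresponding test steps; "..." marks flags elided here.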

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (307.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:35:40.197942   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:35:49.733852   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:35:49.888403   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:36:01.813267   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:01.819795   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:01.829914   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:01.850380   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:01.890809   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:01.971036   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:02.131174   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:02.451406   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:03.092361   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:04.373505   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:04.725376   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:36:06.934380   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:12.054852   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:13.245605   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:36:21.161016   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:36:22.296559   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:33.071529   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.077032   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.088332   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.109873   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.150143   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.231255   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.391400   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:33.712563   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:34.353060   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:35.634694   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:38.196329   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:42.289724   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:36:42.778353   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:36:43.317841   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:36:53.559498   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:37:01.430842   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:37:09.974105   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:37:14.040487   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:37:23.739895   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:37:24.947508   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:24.952662   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:24.963144   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:24.984083   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:25.025553   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:25.105815   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:25.266890   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:25.587276   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:26.227604   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:26.649980   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:37:27.509285   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:30.071275   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:35.191869   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:43.084573   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:37:45.432419   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:37:46.682851   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:37:55.002806   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:38:05.913254   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:38:06.018428   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:38:24.555036   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
E1025 19:38:29.398064   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:38:33.733484   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
E1025 19:38:37.375344   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:37.380433   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:37.390989   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:37.411197   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:37.451938   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:37.532655   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:37.694802   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:38.016110   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:38.657150   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:39.938841   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:42.500563   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:45.663486   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:38:46.875415   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:38:47.621163   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:38:51.764363   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:38:57.091850   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:38:57.862828   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:39:10.137208   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
E1025 19:39:16.926571   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:39:18.344340   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:39:42.805191   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:39:59.236624   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:39:59.307841   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:40:08.799766   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:40:10.495364   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:40:26.930137   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-080000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.3: (5m7.201708437s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-080000 -n no-preload-080000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (307.37s)
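The repeated cert_rotation.go:168 errors above appear to come from the test binary's shared Kubernetes client (PID 77290): its certificate-rotation watcher keeps trying to re-read client certificates for profiles that earlier tests already deleted (auto-182000, flannel-182000, bridge-182000, kubenet-182000, and so on), so each reload fails with ENOENT. They are background noise for the test output they interleave with, not failures of those tests. A minimal sketch of how a reload against a deleted profile produces exactly this error shape (the path and the program are illustrative, not minikube or client-go code):

// cert_reload_sketch.go - reproduces the shape of the cert_rotation errors
// above: a periodic reload of a client key pair fails with ENOENT once the
// profile directory has been deleted. The path is hypothetical; this is not
// minikube or client-go code.
package main

import (
	"crypto/tls"
	"log"
)

func main() {
	profile := "/Users/jenkins/.minikube/profiles/flannel-182000" // hypothetical path
	_, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
	if err != nil {
		// Prints: key failed with : open .../client.crt: no such file or directory
		log.Printf("key failed with : %v", err)
	}
}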

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xsdwz" [2526708f-0ff8-4f60-959f-50d08500e6c7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012542863s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xsdwz" [2526708f-0ff8-4f60-959f-50d08500e6c7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008240238s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-080000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-080000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)
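VerifyKubernetesImages works by listing the node's images over SSH as JSON and flagging anything outside the expected Kubernetes image set, which is why the busybox image deployed by earlier steps is reported as "non-minikube". A simplified sketch of that kind of scan over `crictl images -o json` output (the "images"/"repoTags" JSON fields follow CRI's ListImages response; the registry.k8s.io allow-list rule is an assumption, not the test's actual filter):

// images_check_sketch.go - simplified scan of `crictl images -o json` output
// for images outside an expected set. The allow-list rule below is
// illustrative, not the real test logic.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := []byte(`{"images":[
		{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"]},
		{"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}]}`)

	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Simplified rule: treat anything outside registry.k8s.io as extra.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}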

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (1.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-080000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-080000 -n no-preload-080000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-080000 -n no-preload-080000: exit status 2 (157.163626ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-080000 -n no-preload-080000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-080000 -n no-preload-080000: exit status 2 (158.906331ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-080000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-080000 -n no-preload-080000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-080000 -n no-preload-080000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.92s)
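In the Pause checks above, --format={{.APIServer}} and --format={{.Kubelet}} are Go text/template expressions rendered against the profile's status, and the non-zero "exit status 2 (may be ok)" is expected while components report Paused or Stopped instead of Running. A small sketch of that templating mechanism (the Status struct and its field values are illustrative, not minikube's actual status type):

// status_template_sketch.go - rendering a --format style Go template such as
// {{.APIServer}} against a status value. Struct and values are illustrative.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints "Paused", matching the captured stdout above; a wrapper that maps
	// non-Running states to a non-zero exit code would account for the
	// "(may be ok)" handling in the test.
}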

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (50.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-195000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:41:01.821051   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:41:21.230557   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:41:29.508656   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:41:33.079285   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:41:42.297233   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-195000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.3: (50.700872388s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.70s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-195000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5573f67b-3af3-4555-a533-7006e6ace6e9] Pending
helpers_test.go:344: "busybox" [5573f67b-3af3-4555-a533-7006e6ace6e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5573f67b-3af3-4555-a533-7006e6ace6e9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.018163305s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-195000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.29s)
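Each group's DeployApp step follows the same create-wait-exec shape: apply testdata/busybox.yaml, poll pods labelled integration-test=busybox until they report Running (the Pending / ContainersNotReady lines above are the intermediate states), then exec `ulimit -n` in the pod as a smoke test of the container's file-descriptor limit. A rough sketch of that flow via kubectl (using `kubectl wait` is an assumption; the real harness polls through its own helpers_test.go helpers):

// deploy_wait_sketch.go - rough shape of the DeployApp flow: create the pod,
// wait for the integration-test=busybox label to become Ready, then exec a
// shell command in it.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "embed-certs-195000" // profile/context name taken from the log above

	// testdata/busybox.yaml is the manifest referenced by the test.
	if _, err := kubectl("--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
		panic(err)
	}
	// Covers the Pending -> Running / Ready transition recorded above.
	if _, err := kubectl("--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m"); err != nil {
		panic(err)
	}
	out, _ := kubectl("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Print(out) // the pod's open-file soft limit
}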

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-195000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-195000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-195000 --alsologtostderr -v=3
E1025 19:42:00.773473   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:42:01.441064   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-195000 --alsologtostderr -v=3: (8.303307204s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-195000 -n embed-certs-195000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-195000 -n embed-certs-195000: exit status 7 (67.894705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-195000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)
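The "exit status 7 (may be ok)" pattern above relies on minikube status using distinct non-zero exit codes for non-running states, so the harness can tell "host is Stopped, as expected after minikube stop" apart from a real error. An illustration of that kind of state-to-exit-code mapping (only "Stopped" -> 7 is taken from the log; the other values are placeholders, not minikube's actual table):

// exit_code_sketch.go - illustrative mapping from a host state string to a
// status exit code. Only "Stopped" -> 7 is grounded in the log above.
package main

import (
	"fmt"
	"os"
)

func exitCodeFor(state string) int {
	switch state {
	case "Running":
		return 0
	case "Stopped":
		return 7 // matches "exit status 7 (may be ok)" above
	default:
		return 1 // placeholder for unknown/error states
	}
}

func main() {
	fmt.Println("Stopped")
	os.Exit(exitCodeFor("Stopped"))
}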

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (299.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-195000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:42:24.957541   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:42:46.691339   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:42:52.644963   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:43:06.027020   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-195000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.3: (4m59.249397967s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-195000 -n embed-certs-195000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bxbts" [ec39e599-10b1-499e-96c4-6fc6a8db22f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010451909s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bxbts" [ec39e599-10b1-499e-96c4-6fc6a8db22f8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00645419s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (1.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-159000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-159000 -n old-k8s-version-159000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 2 (153.838377ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-159000 -n old-k8s-version-159000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 2 (156.350635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-159000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-159000 -n old-k8s-version-159000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-159000 -n old-k8s-version-159000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-895000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:43:34.843511   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:43:37.383893   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:43:51.772463   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:44:05.076928   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:44:10.147105   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-895000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.3: (49.200793976s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-895000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b85c0ec3-3b61-44b9-86cb-1138cb8add82] Pending
helpers_test.go:344: "busybox" [b85c0ec3-3b61-44b9-86cb-1138cb8add82] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b85c0ec3-3b61-44b9-86cb-1138cb8add82] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.017627524s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-895000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-895000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-895000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-895000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-895000 --alsologtostderr -v=3: (8.269250016s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000: exit status 7 (69.478639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-895000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-895000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:44:42.814493   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
E1025 19:44:59.245228   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/false-182000/client.crt: no such file or directory
E1025 19:45:03.495196   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:03.501157   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:03.511331   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:03.531654   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:03.572151   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:03.653170   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:03.815018   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:04.135619   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:04.777412   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:05.661509   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:05.667392   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:05.677977   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:05.698401   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:05.738863   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:05.820755   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:05.981188   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:06.057750   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:06.302953   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:06.943638   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:08.224430   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:08.618037   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:10.785828   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:13.738724   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:15.907402   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:23.980080   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:26.148202   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:45:44.461640   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:45:46.630643   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:46:01.828577   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/enable-default-cni-182000/client.crt: no such file or directory
E1025 19:46:25.423022   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:46:27.592906   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:46:33.089004   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/flannel-182000/client.crt: no such file or directory
E1025 19:46:42.306437   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:47:01.449414   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/skaffold-579000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-895000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.3: (4m57.351782598s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (297.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bzbjd" [1c2a3236-528e-40c9-9029-bf3467c6dc94] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012775667s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-bzbjd" [1c2a3236-528e-40c9-9029-bf3467c6dc94] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00800999s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-195000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-195000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (1.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-195000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-195000 -n embed-certs-195000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-195000 -n embed-certs-195000: exit status 2 (158.943656ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-195000 -n embed-certs-195000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-195000 -n embed-certs-195000: exit status 2 (157.967891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-195000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-195000 -n embed-certs-195000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-195000 -n embed-certs-195000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:47:24.965055   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/bridge-182000/client.crt: no such file or directory
E1025 19:47:46.699284   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/ingress-addon-legacy-918000/client.crt: no such file or directory
E1025 19:47:47.345977   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E1025 19:47:49.515545   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/no-preload-080000/client.crt: no such file or directory
E1025 19:48:05.353723   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/auto-182000/client.crt: no such file or directory
E1025 19:48:06.035857   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kindnet-182000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.3: (46.989786237s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.99s)
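Unlike the earlier groups, newest-cni starts with --wait=apiserver,system_pods,default_sa rather than --wait=true, i.e. a comma-separated subset of readiness components, which is consistent with the later DeployApp / UserAppExistsAfterStop / AddonExistsAfterStop steps passing in 0.00s with the "cni mode requires additional setup" warning. A minimal sketch of parsing such a component list into a lookup set (the helper itself is illustrative; only the three component names come from the command line above):

// wait_components_sketch.go - parse a --wait style comma-separated component
// list into a set. Illustrative helper, not minikube code.
package main

import (
	"fmt"
	"strings"
)

func parseWait(flag string) map[string]bool {
	set := make(map[string]bool)
	for _, c := range strings.Split(flag, ",") {
		if c = strings.TrimSpace(c); c != "" {
			set[c] = true
		}
	}
	return set
}

func main() {
	wait := parseWait("apiserver,system_pods,default_sa")
	fmt.Println(wait["apiserver"], wait["kubelet"]) // true false
}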

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-205000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-205000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-205000 --alsologtostderr -v=3: (8.250095515s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000: exit status 7 (68.355093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-205000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.3
E1025 19:48:29.415767   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
E1025 19:48:37.393444   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/kubenet-182000/client.crt: no such file or directory
E1025 19:48:51.780540   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/addons-112000/client.crt: no such file or directory
E1025 19:48:53.208214   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/functional-441000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-205000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.3: (37.469733142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-205000 -n newest-cni-205000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-205000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (1.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-205000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-205000 -n newest-cni-205000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-205000 -n newest-cni-205000: exit status 2 (169.167574ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-205000 -n newest-cni-205000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-205000 -n newest-cni-205000: exit status 2 (170.812549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-205000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-205000 -n newest-cni-205000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-205000 -n newest-cni-205000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wsv8x" [9bbe51ad-047e-4f9f-911b-bf44e06907fe] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014501191s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wsv8x" [9bbe51ad-047e-4f9f-911b-bf44e06907fe] Running
E1025 19:49:42.822301   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/custom-flannel-182000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007745125s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-895000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-895000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-895000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000: exit status 2 (159.135441ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000: exit status 2 (159.153497ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-895000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-895000 -n default-k8s-diff-port-895000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.86s)

Test skip (20/322)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-182000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-182000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-182000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/hosts:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/resolv.conf:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-182000

>>> host: crictl pods:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: crictl containers:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> k8s: describe netcat deployment:
error: context "cilium-182000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-182000" does not exist

>>> k8s: netcat logs:
error: context "cilium-182000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-182000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-182000" does not exist

>>> k8s: coredns logs:
error: context "cilium-182000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-182000" does not exist

>>> k8s: api server logs:
error: context "cilium-182000" does not exist

>>> host: /etc/cni:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: ip a s:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: ip r s:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: iptables-save:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: iptables table nat:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-182000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-182000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-182000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-182000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-182000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-182000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-182000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-182000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-182000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-182000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-182000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: kubelet daemon config:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> k8s: kubelet logs:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-182000

>>> host: docker daemon status:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: docker daemon config:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: docker system info:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: cri-docker daemon status:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: cri-docker daemon config:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: cri-dockerd version:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: containerd daemon status:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: containerd daemon config:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: containerd config dump:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: crio daemon status:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: crio daemon config:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: /etc/crio:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

>>> host: crio config:
* Profile "cilium-182000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182000"

----------------------- debugLogs end: cilium-182000 [took: 5.453525263s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-182000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-182000
--- SKIP: TestNetworkPlugins/group/cilium (5.84s)

TestStartStop/group/disable-driver-mounts (0.4s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-186000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-186000
E1025 19:43:29.406285   77290 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/calico-182000/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.40s)
