Test Report: Hyperkit_macOS 17340

49babfe4fcdff3bcc398a25366bae00d3ae6dc66:2023-10-02:31256

Tests failed: 5/309

Order  Failed test                                                        Duration (s)
152    TestImageBuild/serial/Setup                                        16.1
190    TestMinikubeProfile                                                21.89
244    TestStoppedBinaryUpgrade/Upgrade                                   116.86
271    TestNetworkPlugins/group/auto/Start                                15.28
351    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages  3
TestImageBuild/serial/Setup (16.1s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-387000 --driver=hyperkit 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p image-387000 --driver=hyperkit : exit status 90 (15.969427818s)

-- stdout --
	* [image-387000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node image-387000 in cluster image-387000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-amd64 start -p image-387000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p image-387000 -n image-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p image-387000 -n image-387000: exit status 6 (133.938131ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 03:49:36.797461   11786 status.go:415] kubeconfig endpoint: extract IP: "image-387000" does not appear in /Users/jenkins/minikube-integration/17340-9782/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-387000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (16.10s)
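The `RUNTIME_ENABLE` error above points at the `cri-docker.socket` unit inside the guest VM, and the log already suggests `journalctl -xe`. A minimal diagnostic sketch, assuming `minikube` is on PATH and the `image-387000` profile booted far enough to accept SSH (neither held in this run, so each probe degrades to a no-op):

```shell
#!/bin/sh
# Hedged sketch for diagnosing the cri-docker.socket restart failure above.
# PROFILE is taken from the log; adjust for your own cluster.
PROFILE=image-387000

if command -v minikube >/dev/null 2>&1; then
  # Inspect the failing socket unit and its service journal inside the guest.
  minikube ssh -p "$PROFILE" -- sudo systemctl status cri-docker.socket || true
  minikube ssh -p "$PROFILE" -- sudo journalctl -u cri-docker.service --no-pager -n 50 || true
  # Collect full logs for a bug report, as the error box advises.
  minikube logs -p "$PROFILE" --file=logs.txt || true
else
  echo "minikube not on PATH; skipping cri-docker diagnostics"
fi
```

Every probe is suffixed with `|| true` because on a host where the VM never came up (as here) `minikube ssh` itself fails.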

TestMinikubeProfile (21.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-100000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p first-100000 --driver=hyperkit : exit status 90 (15.966558144s)

-- stdout --
	* [first-100000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node first-100000 in cluster first-100000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-amd64 start -p first-100000 --driver=hyperkit ": exit status 90
panic.go:523: *** TestMinikubeProfile FAILED at 2023-10-02 03:53:41.462689 -0700 PDT m=+790.079036344
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p second-102000 -n second-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p second-102000 -n second-102000: exit status 85 (128.516085ms)

-- stdout --
	* Profile "second-102000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-102000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-102000" host is not running, skipping log retrieval (state="* Profile \"second-102000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-102000\"")
helpers_test.go:175: Cleaning up "second-102000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-102000
panic.go:523: *** TestMinikubeProfile FAILED at 2023-10-02 03:53:41.961346 -0700 PDT m=+790.577676524
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p first-100000 -n first-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p first-100000 -n first-100000: exit status 6 (128.436403ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 03:53:42.080484   12173 status.go:415] kubeconfig endpoint: extract IP: "first-100000" does not appear in /Users/jenkins/minikube-integration/17340-9782/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "first-100000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "first-100000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-100000
E1002 03:53:43.302225   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-100000: (5.297118369s)
--- FAIL: TestMinikubeProfile (21.89s)
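The post-mortem above shows the other half of the failure mode: the kubeconfig still references a stale `minikube-vm` endpoint ("does not appear in .../kubeconfig"), and the status output recommends `minikube update-context`. A hedged cleanup sketch, assuming `minikube` and `kubectl` are on PATH and using the profile name from the log:

```shell
#!/bin/sh
# Hedged sketch of the stale-kubeconfig repair the WARNING above suggests.
PROFILE=first-100000   # profile name taken from the log above

if command -v minikube >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
  # Show which contexts kubectl currently knows about.
  kubectl config get-contexts || true
  # Re-point the kubeconfig entry at the profile's current endpoint, as advised.
  minikube update-context -p "$PROFILE" || true
else
  echo "minikube/kubectl not on PATH; skipping kubeconfig repair"
fi
```

Note this only helps once the VM actually runs; in this job the profile was deleted during cleanup, so the repair would be a no-op.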

TestStoppedBinaryUpgrade/Upgrade (116.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.2004673963.exe start -p stopped-upgrade-005000 --memory=2200 --vm-driver=hyperkit 
E1002 04:16:23.154513   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:16:34.004206   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.2004673963.exe start -p stopped-upgrade-005000 --memory=2200 --vm-driver=hyperkit : (1m25.442986373s)
version_upgrade_test.go:205: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.2004673963.exe -p stopped-upgrade-005000 stop
version_upgrade_test.go:205: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.2004673963.exe -p stopped-upgrade-005000 stop: (8.085986839s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-005000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1002 04:17:45.078185   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p stopped-upgrade-005000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (23.325111053s)

-- stdout --
	* [stopped-upgrade-005000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the hyperkit driver based on existing profile
	* Starting control plane node stopped-upgrade-005000 in cluster stopped-upgrade-005000
	* Restarting existing hyperkit VM for "stopped-upgrade-005000" ...
	
	

-- /stdout --
** stderr ** 
	I1002 04:17:33.524627   14904 out.go:296] Setting OutFile to fd 1 ...
	I1002 04:17:33.524915   14904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:17:33.524921   14904 out.go:309] Setting ErrFile to fd 2...
	I1002 04:17:33.524925   14904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:17:33.525113   14904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 04:17:33.526462   14904 out.go:303] Setting JSON to false
	I1002 04:17:33.548644   14904 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6421,"bootTime":1696239032,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 04:17:33.548742   14904 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 04:17:33.570703   14904 out.go:177] * [stopped-upgrade-005000] minikube v1.31.2 on Darwin 14.0
	I1002 04:17:33.636367   14904 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 04:17:33.614991   14904 notify.go:220] Checking for updates...
	I1002 04:17:33.679669   14904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 04:17:33.722536   14904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 04:17:33.787437   14904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 04:17:33.863628   14904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	I1002 04:17:33.922539   14904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 04:17:33.961514   14904 config.go:182] Loaded profile config "stopped-upgrade-005000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1002 04:17:33.961556   14904 start_flags.go:686] config upgrade: Driver=hyperkit
	I1002 04:17:33.961571   14904 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 04:17:33.961694   14904 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/stopped-upgrade-005000/config.json ...
	I1002 04:17:33.962923   14904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:17:33.962990   14904 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:17:33.971974   14904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60562
	I1002 04:17:33.972382   14904 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:17:33.972790   14904 main.go:141] libmachine: Using API Version  1
	I1002 04:17:33.972809   14904 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:17:33.973047   14904 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:17:33.973162   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:33.993381   14904 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 04:17:34.014440   14904 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 04:17:34.014718   14904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:17:34.014752   14904 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:17:34.022658   14904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60564
	I1002 04:17:34.023012   14904 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:17:34.023346   14904 main.go:141] libmachine: Using API Version  1
	I1002 04:17:34.023357   14904 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:17:34.023600   14904 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:17:34.023710   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:34.073672   14904 out.go:177] * Using the hyperkit driver based on existing profile
	I1002 04:17:34.095374   14904 start.go:298] selected driver: hyperkit
	I1002 04:17:34.095397   14904 start.go:902] validating driver "hyperkit" against &{Name:stopped-upgrade-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v
1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.70.55 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 04:17:34.095563   14904 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 04:17:34.099934   14904 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.100101   14904 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17340-9782/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1002 04:17:34.108690   14904 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1002 04:17:34.112604   14904 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:17:34.112623   14904 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1002 04:17:34.112699   14904 cni.go:84] Creating CNI manager for ""
	I1002 04:17:34.112718   14904 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 04:17:34.112725   14904 start_flags.go:321] config:
	{Name:stopped-upgrade-005000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServe
rIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.70.55 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 04:17:34.112883   14904 iso.go:125] acquiring lock: {Name:mkb1616e5312c7f7300d9edabdcb664e7c56c074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.155344   14904 out.go:177] * Starting control plane node stopped-upgrade-005000 in cluster stopped-upgrade-005000
	I1002 04:17:34.176525   14904 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1002 04:17:34.239734   14904 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1002 04:17:34.239862   14904 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/stopped-upgrade-005000/config.json ...
	I1002 04:17:34.239957   14904 cache.go:107] acquiring lock: {Name:mk8c91a6d30e29cf2af17925aca310485e04e208 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.239989   14904 cache.go:107] acquiring lock: {Name:mkb2b0eff9868fc4f531fe6cbd18a537cfd574c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.240032   14904 cache.go:107] acquiring lock: {Name:mk9512a6cfb488a5a77ba785cb0cd0022fcb9ed8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.239975   14904 cache.go:107] acquiring lock: {Name:mkca834c6a96af79b67e4c2f6135afd242f71a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.240126   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1002 04:17:34.240149   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1002 04:17:34.240151   14904 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 197.332µs
	I1002 04:17:34.240168   14904 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1002 04:17:34.240165   14904 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 200.924µs
	I1002 04:17:34.240179   14904 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1002 04:17:34.240194   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1002 04:17:34.240752   14904 cache.go:107] acquiring lock: {Name:mka538253964b5e86593ad932922980da69bb236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.240713   14904 cache.go:107] acquiring lock: {Name:mk7343ed73cb908118215418b1fcc3977fe44b17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.240995   14904 cache.go:107] acquiring lock: {Name:mk32caa20183f76db9ba5f4da3e80a13626c122b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.241014   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 04:17:34.241085   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1002 04:17:34.241054   14904 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.080011ms
	I1002 04:17:34.241102   14904 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 04:17:34.241102   14904 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 506.143µs
	I1002 04:17:34.241124   14904 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1002 04:17:34.240807   14904 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 677.705µs
	I1002 04:17:34.241173   14904 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1002 04:17:34.241103   14904 cache.go:107] acquiring lock: {Name:mk5001cc7eb25bea578707f8a9dd5d4efab5ae8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:17:34.241172   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1002 04:17:34.241195   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1002 04:17:34.241196   14904 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.088522ms
	I1002 04:17:34.241207   14904 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.053133ms
	I1002 04:17:34.241215   14904 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1002 04:17:34.241219   14904 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1002 04:17:34.241232   14904 start.go:365] acquiring machines lock for stopped-upgrade-005000: {Name:mk5657db51c0d6006a9e01bb2a1802e115658af0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 04:17:34.241329   14904 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1002 04:17:34.241350   14904 start.go:369] acquired machines lock for "stopped-upgrade-005000" in 84.982µs
	I1002 04:17:34.241342   14904 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.222882ms
	I1002 04:17:34.241369   14904 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1002 04:17:34.241384   14904 start.go:96] Skipping create...Using existing machine configuration
	I1002 04:17:34.241385   14904 cache.go:87] Successfully saved all images to host disk.
	I1002 04:17:34.241399   14904 fix.go:54] fixHost starting: minikube
	I1002 04:17:34.242153   14904 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:17:34.242179   14904 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:17:34.251460   14904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60566
	I1002 04:17:34.251832   14904 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:17:34.252279   14904 main.go:141] libmachine: Using API Version  1
	I1002 04:17:34.252296   14904 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:17:34.252541   14904 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:17:34.252678   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:34.252963   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetState
	I1002 04:17:34.253160   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:17:34.253298   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | hyperkit pid from json: 14743
	I1002 04:17:34.254448   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | hyperkit pid 14743 missing from process table
	I1002 04:17:34.254482   14904 fix.go:102] recreateIfNeeded on stopped-upgrade-005000: state=Stopped err=<nil>
	I1002 04:17:34.254499   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	W1002 04:17:34.254584   14904 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 04:17:34.296524   14904 out.go:177] * Restarting existing hyperkit VM for "stopped-upgrade-005000" ...
	I1002 04:17:34.318434   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .Start
	I1002 04:17:34.318760   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:17:34.318960   14904 main.go:141] libmachine: (stopped-upgrade-005000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/hyperkit.pid
	I1002 04:17:34.320630   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | hyperkit pid 14743 missing from process table
	I1002 04:17:34.320701   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | pid 14743 is in state "Stopped"
	I1002 04:17:34.320837   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/hyperkit.pid...
	I1002 04:17:34.320919   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Using UUID 111880cc-6115-11ee-b885-149d997cd0f1
	I1002 04:17:34.340238   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Generated MAC e6:3e:57:75:71:61
	I1002 04:17:34.340270   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-005000
	I1002 04:17:34.340521   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"111880cc-6115-11ee-b885-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003eff80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1002 04:17:34.340585   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"111880cc-6115-11ee-b885-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003eff80)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1002 04:17:34.340650   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "111880cc-6115-11ee-b885-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/stopped-upgrade-005000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/tty,log=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/bzimage,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-005000"}
	I1002 04:17:34.340734   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 111880cc-6115-11ee-b885-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/stopped-upgrade-005000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/tty,log=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/console-ring -f kexec,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/bzimage,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=stopped-upgrade-005000"
	I1002 04:17:34.340757   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1002 04:17:34.342511   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 DEBUG: hyperkit: Pid is 14916
	I1002 04:17:34.343010   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Attempt 0
	I1002 04:17:34.343026   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:17:34.343838   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | hyperkit pid from json: 14916
	I1002 04:17:34.345434   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Searching for e6:3e:57:75:71:61 in /var/db/dhcpd_leases ...
	I1002 04:17:34.345589   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Found 70 entries in /var/db/dhcpd_leases!
	I1002 04:17:34.345626   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.54 HWAddress:ee:38:8e:f1:af:fd ID:1,ee:38:8e:f1:af:fd Lease:0x651bf823}
	I1002 04:17:34.345656   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.55 HWAddress:e6:3e:57:75:71:61 ID:1,e6:3e:57:75:71:61 Lease:0x651bf801}
	I1002 04:17:34.345670   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | Found match: e6:3e:57:75:71:61
	I1002 04:17:34.345680   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | IP: 192.168.70.55
	I1002 04:17:34.345731   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetConfigRaw
	I1002 04:17:34.346725   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetIP
	I1002 04:17:34.346952   14904 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/stopped-upgrade-005000/config.json ...
	I1002 04:17:34.347410   14904 machine.go:88] provisioning docker machine ...
	I1002 04:17:34.347435   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:34.347667   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetMachineName
	I1002 04:17:34.347863   14904 buildroot.go:166] provisioning hostname "stopped-upgrade-005000"
	I1002 04:17:34.347915   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetMachineName
	I1002 04:17:34.348136   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:34.348297   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:34.348491   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:34.348646   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:34.348804   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:34.349013   14904 main.go:141] libmachine: Using SSH client type: native
	I1002 04:17:34.349329   14904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.55 22 <nil> <nil>}
	I1002 04:17:34.349342   14904 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-005000 && echo "stopped-upgrade-005000" | sudo tee /etc/hostname
	I1002 04:17:34.353707   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1002 04:17:34.363748   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1002 04:17:34.364936   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1002 04:17:34.364964   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1002 04:17:34.364979   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1002 04:17:34.364990   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1002 04:17:34.757032   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1002 04:17:34.862409   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1002 04:17:34.862442   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1002 04:17:34.862463   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1002 04:17:34.862483   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1002 04:17:34.863288   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1002 04:17:49.664110   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:49 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1002 04:17:49.664138   14904 main.go:141] libmachine: (stopped-upgrade-005000) DBG | 2023/10/02 04:17:49 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1002 04:17:53.942158   14904 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-005000
	
	I1002 04:17:53.942178   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:53.942354   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:53.942469   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:53.942592   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:53.942685   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:53.942834   14904 main.go:141] libmachine: Using SSH client type: native
	I1002 04:17:53.943094   14904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.55 22 <nil> <nil>}
	I1002 04:17:53.943107   14904 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-005000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-005000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-005000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 04:17:54.020201   14904 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 04:17:54.020220   14904 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-9782/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-9782/.minikube}
	I1002 04:17:54.020240   14904 buildroot.go:174] setting up certificates
	I1002 04:17:54.020250   14904 provision.go:83] configureAuth start
	I1002 04:17:54.020257   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetMachineName
	I1002 04:17:54.020393   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetIP
	I1002 04:17:54.020489   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:54.020566   14904 provision.go:138] copyHostCerts
	I1002 04:17:54.020639   14904 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem, removing ...
	I1002 04:17:54.020651   14904 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem
	I1002 04:17:54.061711   14904 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem (1078 bytes)
	I1002 04:17:54.099479   14904 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem, removing ...
	I1002 04:17:54.099493   14904 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem
	I1002 04:17:54.099654   14904 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem (1123 bytes)
	I1002 04:17:54.099936   14904 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem, removing ...
	I1002 04:17:54.099945   14904 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem
	I1002 04:17:54.100036   14904 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem (1679 bytes)
	I1002 04:17:54.100231   14904 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-005000 san=[192.168.70.55 192.168.70.55 localhost 127.0.0.1 minikube stopped-upgrade-005000]
	I1002 04:17:54.265713   14904 provision.go:172] copyRemoteCerts
	I1002 04:17:54.265767   14904 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 04:17:54.265784   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:54.265886   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:54.265981   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.266068   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:54.266165   14904 sshutil.go:53] new ssh client: &{IP:192.168.70.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/id_rsa Username:docker}
	I1002 04:17:54.307289   14904 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 04:17:54.316809   14904 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 04:17:54.325560   14904 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 04:17:54.334233   14904 provision.go:86] duration metric: configureAuth took 313.971524ms
	I1002 04:17:54.334243   14904 buildroot.go:189] setting minikube options for container-runtime
	I1002 04:17:54.334346   14904 config.go:182] Loaded profile config "stopped-upgrade-005000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1002 04:17:54.334361   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:54.334507   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:54.334596   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:54.334684   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.334767   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.334854   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:54.334963   14904 main.go:141] libmachine: Using SSH client type: native
	I1002 04:17:54.335197   14904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.55 22 <nil> <nil>}
	I1002 04:17:54.335206   14904 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 04:17:54.409886   14904 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 04:17:54.409898   14904 buildroot.go:70] root file system type: tmpfs
	I1002 04:17:54.409978   14904 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 04:17:54.409994   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:54.410131   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:54.410223   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.410334   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.410412   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:54.410548   14904 main.go:141] libmachine: Using SSH client type: native
	I1002 04:17:54.410785   14904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.55 22 <nil> <nil>}
	I1002 04:17:54.410832   14904 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 04:17:54.489881   14904 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 04:17:54.489901   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:54.490032   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:54.490140   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.490253   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.490361   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:54.490488   14904 main.go:141] libmachine: Using SSH client type: native
	I1002 04:17:54.490737   14904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.55 22 <nil> <nil>}
	I1002 04:17:54.490749   14904 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 04:17:54.978460   14904 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 04:17:54.978476   14904 machine.go:91] provisioned docker machine in 20.630656576s
	I1002 04:17:54.978488   14904 start.go:300] post-start starting for "stopped-upgrade-005000" (driver="hyperkit")
	I1002 04:17:54.978498   14904 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 04:17:54.978509   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:54.978699   14904 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 04:17:54.978721   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:54.978817   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:54.978926   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:54.979022   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:54.979111   14904 sshutil.go:53] new ssh client: &{IP:192.168.70.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/id_rsa Username:docker}
	I1002 04:17:55.019668   14904 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 04:17:55.022293   14904 info.go:137] Remote host: Buildroot 2019.02.7
	I1002 04:17:55.022305   14904 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-9782/.minikube/addons for local assets ...
	I1002 04:17:55.022389   14904 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-9782/.minikube/files for local assets ...
	I1002 04:17:55.022548   14904 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem -> 102442.pem in /etc/ssl/certs
	I1002 04:17:55.022720   14904 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 04:17:55.026276   14904 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem --> /etc/ssl/certs/102442.pem (1708 bytes)
	I1002 04:17:55.035075   14904 start.go:303] post-start completed in 56.579999ms
	I1002 04:17:55.035087   14904 fix.go:56] fixHost completed within 20.793287688s
	I1002 04:17:55.035102   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:55.035231   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:55.035326   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:55.035418   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:55.035502   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:55.035610   14904 main.go:141] libmachine: Using SSH client type: native
	I1002 04:17:55.035852   14904 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.55 22 <nil> <nil>}
	I1002 04:17:55.035862   14904 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 04:17:55.110888   14904 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696245474.312676462
	
	I1002 04:17:55.110905   14904 fix.go:206] guest clock: 1696245474.312676462
	I1002 04:17:55.110911   14904 fix.go:219] Guest: 2023-10-02 04:17:54.312676462 -0700 PDT Remote: 2023-10-02 04:17:55.035091 -0700 PDT m=+21.540897856 (delta=-722.414538ms)
	I1002 04:17:55.110934   14904 fix.go:190] guest clock delta is within tolerance: -722.414538ms
	I1002 04:17:55.110939   14904 start.go:83] releasing machines lock for "stopped-upgrade-005000", held for 20.869173224s
	I1002 04:17:55.110960   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:55.111114   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetIP
	I1002 04:17:55.111221   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:55.111543   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:55.111668   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .DriverName
	I1002 04:17:55.111735   14904 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 04:17:55.111765   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:55.111835   14904 ssh_runner.go:195] Run: cat /version.json
	I1002 04:17:55.111847   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHHostname
	I1002 04:17:55.111862   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:55.111977   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHPort
	I1002 04:17:55.112014   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:55.112131   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:55.112157   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHKeyPath
	I1002 04:17:55.112246   14904 sshutil.go:53] new ssh client: &{IP:192.168.70.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/id_rsa Username:docker}
	I1002 04:17:55.112288   14904 main.go:141] libmachine: (stopped-upgrade-005000) Calling .GetSSHUsername
	I1002 04:17:55.112387   14904 sshutil.go:53] new ssh client: &{IP:192.168.70.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/stopped-upgrade-005000/id_rsa Username:docker}
	W1002 04:17:55.200666   14904 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 04:17:55.200774   14904 ssh_runner.go:195] Run: systemctl --version
	I1002 04:17:55.204257   14904 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 04:17:55.208516   14904 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 04:17:55.208583   14904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1002 04:17:55.212596   14904 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1002 04:17:55.216407   14904 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1002 04:17:55.216426   14904 start.go:469] detecting cgroup driver to use...
	I1002 04:17:55.216521   14904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 04:17:55.224483   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1002 04:17:55.229544   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 04:17:55.234455   14904 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 04:17:55.234549   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 04:17:55.239361   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 04:17:55.244119   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 04:17:55.248596   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 04:17:55.253984   14904 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 04:17:55.258714   14904 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 04:17:55.263399   14904 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 04:17:55.267402   14904 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 04:17:55.271573   14904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:17:55.339677   14904 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 04:17:55.349569   14904 start.go:469] detecting cgroup driver to use...
	I1002 04:17:55.349634   14904 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 04:17:55.358032   14904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 04:17:55.367176   14904 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 04:17:55.386346   14904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 04:17:55.395527   14904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 04:17:55.404262   14904 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 04:17:55.412157   14904 ssh_runner.go:195] Run: which cri-dockerd
	I1002 04:17:55.414464   14904 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 04:17:55.418499   14904 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 04:17:55.425097   14904 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 04:17:55.484865   14904 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 04:17:55.555228   14904 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 04:17:55.555371   14904 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 04:17:55.562410   14904 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:17:55.627166   14904 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 04:17:56.668517   14904 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.041300854s)
	I1002 04:17:56.699890   14904 out.go:177] 
	W1002 04:17:56.722060   14904 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W1002 04:17:56.722072   14904 out.go:239] * 
	W1002 04:17:56.722740   14904 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 04:17:56.767178   14904 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p stopped-upgrade-005000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (116.86s)

TestNetworkPlugins/group/auto/Start (15.28s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : exit status 90 (15.258263213s)

-- stdout --
	* [auto-766000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node auto-766000 in cluster auto-766000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I1002 04:19:32.611043   15148 out.go:296] Setting OutFile to fd 1 ...
	I1002 04:19:32.611337   15148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:19:32.611343   15148 out.go:309] Setting ErrFile to fd 2...
	I1002 04:19:32.611349   15148 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:19:32.611584   15148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 04:19:32.613476   15148 out.go:303] Setting JSON to false
	I1002 04:19:32.636304   15148 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":6540,"bootTime":1696239032,"procs":501,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 04:19:32.637131   15148 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 04:19:32.675449   15148 out.go:177] * [auto-766000] minikube v1.31.2 on Darwin 14.0
	I1002 04:19:32.790296   15148 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 04:19:32.752399   15148 notify.go:220] Checking for updates...
	I1002 04:19:32.847928   15148 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 04:19:32.922242   15148 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 04:19:32.963966   15148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 04:19:33.022105   15148 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	I1002 04:19:33.081048   15148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 04:19:33.102324   15148 config.go:182] Loaded profile config "NoKubernetes-875000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1002 04:19:33.102416   15148 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 04:19:33.131084   15148 out.go:177] * Using the hyperkit driver based on user configuration
	I1002 04:19:33.152100   15148 start.go:298] selected driver: hyperkit
	I1002 04:19:33.152118   15148 start.go:902] validating driver "hyperkit" against <nil>
	I1002 04:19:33.152130   15148 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 04:19:33.155221   15148 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:19:33.155330   15148 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17340-9782/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1002 04:19:33.163208   15148 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1002 04:19:33.167145   15148 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:19:33.167164   15148 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1002 04:19:33.167201   15148 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 04:19:33.167400   15148 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 04:19:33.167436   15148 cni.go:84] Creating CNI manager for ""
	I1002 04:19:33.167453   15148 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 04:19:33.167459   15148 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 04:19:33.167466   15148 start_flags.go:321] config:
	{Name:auto-766000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 04:19:33.167603   15148 iso.go:125] acquiring lock: {Name:mkb1616e5312c7f7300d9edabdcb664e7c56c074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:19:33.210038   15148 out.go:177] * Starting control plane node auto-766000 in cluster auto-766000
	I1002 04:19:33.231039   15148 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 04:19:33.231069   15148 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 04:19:33.231084   15148 cache.go:57] Caching tarball of preloaded images
	I1002 04:19:33.231180   15148 preload.go:174] Found /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 04:19:33.231189   15148 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 04:19:33.231294   15148 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/auto-766000/config.json ...
	I1002 04:19:33.231316   15148 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/auto-766000/config.json: {Name:mk1e0b50430e83ae30917ed5dfe8e1af094cabc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:19:33.231632   15148 start.go:365] acquiring machines lock for auto-766000: {Name:mk5657db51c0d6006a9e01bb2a1802e115658af0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 04:19:33.231682   15148 start.go:369] acquired machines lock for "auto-766000" in 40.138µs
	I1002 04:19:33.231698   15148 start.go:93] Provisioning new machine with config: &{Name:auto-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-766000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 04:19:33.231749   15148 start.go:125] createHost starting for "" (driver="hyperkit")
	I1002 04:19:33.252886   15148 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1002 04:19:33.253144   15148 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:19:33.253189   15148 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:19:33.261023   15148 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:60884
	I1002 04:19:33.261373   15148 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:19:33.261785   15148 main.go:141] libmachine: Using API Version  1
	I1002 04:19:33.261799   15148 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:19:33.262044   15148 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:19:33.262174   15148 main.go:141] libmachine: (auto-766000) Calling .GetMachineName
	I1002 04:19:33.262260   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:33.262363   15148 start.go:159] libmachine.API.Create for "auto-766000" (driver="hyperkit")
	I1002 04:19:33.262395   15148 client.go:168] LocalClient.Create starting
	I1002 04:19:33.262428   15148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem
	I1002 04:19:33.262471   15148 main.go:141] libmachine: Decoding PEM data...
	I1002 04:19:33.262488   15148 main.go:141] libmachine: Parsing certificate...
	I1002 04:19:33.262552   15148 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem
	I1002 04:19:33.262577   15148 main.go:141] libmachine: Decoding PEM data...
	I1002 04:19:33.262590   15148 main.go:141] libmachine: Parsing certificate...
	I1002 04:19:33.262606   15148 main.go:141] libmachine: Running pre-create checks...
	I1002 04:19:33.262617   15148 main.go:141] libmachine: (auto-766000) Calling .PreCreateCheck
	I1002 04:19:33.262696   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:33.262889   15148 main.go:141] libmachine: (auto-766000) Calling .GetConfigRaw
	I1002 04:19:33.274237   15148 main.go:141] libmachine: Creating machine...
	I1002 04:19:33.274250   15148 main.go:141] libmachine: (auto-766000) Calling .Create
	I1002 04:19:33.274373   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:33.274540   15148 main.go:141] libmachine: (auto-766000) DBG | I1002 04:19:33.274372   15156 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/17340-9782/.minikube
	I1002 04:19:33.274605   15148 main.go:141] libmachine: (auto-766000) Downloading /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17340-9782/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 04:19:33.443943   15148 main.go:141] libmachine: (auto-766000) DBG | I1002 04:19:33.443809   15156 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/id_rsa...
	I1002 04:19:33.576637   15148 main.go:141] libmachine: (auto-766000) DBG | I1002 04:19:33.576553   15156 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/auto-766000.rawdisk...
	I1002 04:19:33.576652   15148 main.go:141] libmachine: (auto-766000) DBG | Writing magic tar header
	I1002 04:19:33.576660   15148 main.go:141] libmachine: (auto-766000) DBG | Writing SSH key tar header
	I1002 04:19:33.577271   15148 main.go:141] libmachine: (auto-766000) DBG | I1002 04:19:33.577233   15156 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000 ...
	I1002 04:19:34.027790   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:34.027814   15148 main.go:141] libmachine: (auto-766000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/hyperkit.pid
	I1002 04:19:34.027828   15148 main.go:141] libmachine: (auto-766000) DBG | Using UUID 8ff5e772-6115-11ee-adf3-149d997cd0f1
	I1002 04:19:34.047500   15148 main.go:141] libmachine: (auto-766000) DBG | Generated MAC ca:c3:3f:1c:55:a
	I1002 04:19:34.047522   15148 main.go:141] libmachine: (auto-766000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=auto-766000
	I1002 04:19:34.047565   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8ff5e772-6115-11ee-adf3-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110360)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1002 04:19:34.047605   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"8ff5e772-6115-11ee-adf3-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000110360)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1002 04:19:34.047646   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "8ff5e772-6115-11ee-adf3-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/auto-766000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/tty,log=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/bzimage,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=auto-766000"}
	I1002 04:19:34.047689   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 8ff5e772-6115-11ee-adf3-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/auto-766000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/tty,log=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/console-ring -f kexec,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/bzimage,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=auto-766000"
	I1002 04:19:34.047702   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1002 04:19:34.050512   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 DEBUG: hyperkit: Pid is 15157
	I1002 04:19:34.050932   15148 main.go:141] libmachine: (auto-766000) DBG | Attempt 0
	I1002 04:19:34.050944   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:34.051037   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:34.052019   15148 main.go:141] libmachine: (auto-766000) DBG | Searching for ca:c3:3f:1c:55:a in /var/db/dhcpd_leases ...
	I1002 04:19:34.052163   15148 main.go:141] libmachine: (auto-766000) DBG | Found 73 entries in /var/db/dhcpd_leases!
	I1002 04:19:34.052175   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.58 HWAddress:42:37:9a:9a:81:4e ID:1,42:37:9a:9a:81:4e Lease:0x651bf8c2}
	I1002 04:19:34.052203   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.57 HWAddress:8e:d3:8c:17:c0:7d ID:1,8e:d3:8c:17:c0:7d Lease:0x651aa722}
	I1002 04:19:34.052220   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.56 HWAddress:d6:30:42:4e:d1:bb ID:1,d6:30:42:4e:d1:bb Lease:0x651bf86c}
	I1002 04:19:34.052242   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.55 HWAddress:e6:3e:57:75:71:61 ID:1,e6:3e:57:75:71:61 Lease:0x651bf860}
	I1002 04:19:34.052254   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.54 HWAddress:ee:38:8e:f1:af:fd ID:1,ee:38:8e:f1:af:fd Lease:0x651aa6e0}
	I1002 04:19:34.052268   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.53 HWAddress:f2:40:42:2a:9e:b9 ID:1,f2:40:42:2a:9e:b9 Lease:0x651bf741}
	I1002 04:19:34.052284   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.52 HWAddress:1e:1e:85:26:e8:d1 ID:1,1e:1e:85:26:e8:d1 Lease:0x651aa5ac}
	I1002 04:19:34.052297   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.51 HWAddress:82:31:7f:2c:92:61 ID:1,82:31:7f:2c:92:61 Lease:0x651bf703}
	I1002 04:19:34.052310   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.50 HWAddress:92:85:3:c0:9b:8b ID:1,92:85:3:c0:9b:8b Lease:0x651bf6e8}
	I1002 04:19:34.052324   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.49 HWAddress:a6:eb:1:c3:3f:33 ID:1,a6:eb:1:c3:3f:33 Lease:0x651aa578}
	I1002 04:19:34.052337   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.48 HWAddress:b6:d4:ee:80:e3:7c ID:1,b6:d4:ee:80:e3:7c Lease:0x651bf6b7}
	I1002 04:19:34.052349   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.47 HWAddress:3a:ed:29:91:4a:d2 ID:1,3a:ed:29:91:4a:d2 Lease:0x651bf6a2}
	I1002 04:19:34.052368   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.46 HWAddress:ba:a0:4a:72:ba:62 ID:1,ba:a0:4a:72:ba:62 Lease:0x651bf636}
	I1002 04:19:34.052380   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.45 HWAddress:f2:10:90:e6:b6:f7 ID:1,f2:10:90:e6:b6:f7 Lease:0x651bf5ca}
	I1002 04:19:34.052396   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.44 HWAddress:ea:4c:aa:8:e4:9e ID:1,ea:4c:aa:8:e4:9e Lease:0x651bf57b}
	I1002 04:19:34.052405   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.43 HWAddress:e:6a:a3:fe:d2:cb ID:1,e:6a:a3:fe:d2:cb Lease:0x651aa394}
	I1002 04:19:34.052419   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.42 HWAddress:d6:a7:4a:88:4e:ce ID:1,d6:a7:4a:88:4e:ce Lease:0x651aa2da}
	I1002 04:19:34.052435   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.41 HWAddress:42:af:87:39:6e:40 ID:1,42:af:87:39:6e:40 Lease:0x651bf4c2}
	I1002 04:19:34.052446   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.40 HWAddress:be:0:2f:ae:61:a6 ID:1,be:0:2f:ae:61:a6 Lease:0x651bf475}
	I1002 04:19:34.052458   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.39 HWAddress:fa:4e:36:7c:45:59 ID:1,fa:4e:36:7c:45:59 Lease:0x651aa173}
	I1002 04:19:34.052471   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.38 HWAddress:be:31:47:3d:af:ca ID:1,be:31:47:3d:af:ca Lease:0x651aa15d}
	I1002 04:19:34.052486   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.37 HWAddress:ce:fe:4c:b:20:0 ID:1,ce:fe:4c:b:20:0 Lease:0x651bf2b0}
	I1002 04:19:34.052496   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.36 HWAddress:f6:d3:d8:c5:b:4 ID:1,f6:d3:d8:c5:b:4 Lease:0x651bf273}
	I1002 04:19:34.052518   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.35 HWAddress:e2:9b:39:3b:b6:81 ID:1,e2:9b:39:3b:b6:81 Lease:0x651bf1d3}
	I1002 04:19:34.052530   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.34 HWAddress:f2:d6:6e:8c:56:79 ID:1,f2:d6:6e:8c:56:79 Lease:0x651bf1bb}
	I1002 04:19:34.052542   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.33 HWAddress:3a:12:4c:79:5d:43 ID:1,3a:12:4c:79:5d:43 Lease:0x651bf0d4}
	I1002 04:19:34.052550   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.32 HWAddress:de:c1:60:39:14:91 ID:1,de:c1:60:39:14:91 Lease:0x651a9f49}
	I1002 04:19:34.052570   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.31 HWAddress:26:f:15:87:ad:4e ID:1,26:f:15:87:ad:4e Lease:0x651befb8}
	I1002 04:19:34.052582   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.30 HWAddress:6a:0:7f:10:d4:d9 ID:1,6a:0:7f:10:d4:d9 Lease:0x651beeba}
	I1002 04:19:34.052592   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.29 HWAddress:52:d3:be:bc:4f:c2 ID:1,52:d3:be:bc:4f:c2 Lease:0x651bee49}
	I1002 04:19:34.052619   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.28 HWAddress:3a:e8:1f:a6:a4:63 ID:1,3a:e8:1f:a6:a4:63 Lease:0x651bed41}
	I1002 04:19:34.052628   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.27 HWAddress:d2:4e:a:29:75:a7 ID:1,d2:4e:a:29:75:a7 Lease:0x651bebc0}
	I1002 04:19:34.052641   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.26 HWAddress:2a:21:83:2d:61:52 ID:1,2a:21:83:2d:61:52 Lease:0x651bec15}
	I1002 04:19:34.052655   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.25 HWAddress:8a:7d:ad:ea:52:8f ID:1,8a:7d:ad:ea:52:8f Lease:0x651beb1d}
	I1002 04:19:34.052663   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.24 HWAddress:7a:91:f7:be:fd:e3 ID:1,7a:91:f7:be:fd:e3 Lease:0x651beb01}
	I1002 04:19:34.052671   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.23 HWAddress:2e:f4:d7:73:da:57 ID:1,2e:f4:d7:73:da:57 Lease:0x651beac6}
	I1002 04:19:34.052679   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.22 HWAddress:e2:e6:83:39:ae:b1 ID:1,e2:e6:83:39:ae:b1 Lease:0x651beaa8}
	I1002 04:19:34.052687   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.21 HWAddress:ce:bf:5f:b1:ac:25 ID:1,ce:bf:5f:b1:ac:25 Lease:0x651bea97}
	I1002 04:19:34.052695   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.20 HWAddress:ca:d:2a:ac:b1:6 ID:1,ca:d:2a:ac:b1:6 Lease:0x651bea88}
	I1002 04:19:34.052703   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.19 HWAddress:52:f5:58:b1:ed:72 ID:1,52:f5:58:b1:ed:72 Lease:0x651bea3a}
	I1002 04:19:34.052711   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.18 HWAddress:12:d5:a9:d3:2d:62 ID:1,12:d5:a9:d3:2d:62 Lease:0x651bea2e}
	I1002 04:19:34.052718   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.17 HWAddress:8e:72:e7:d4:b0:8b ID:1,8e:72:e7:d4:b0:8b Lease:0x651a98a3}
	I1002 04:19:34.052739   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.16 HWAddress:f6:1c:a1:3f:3a:af ID:1,f6:1c:a1:3f:3a:af Lease:0x651be9e8}
	I1002 04:19:34.052758   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.15 HWAddress:c6:8:4d:2b:4b:5d ID:1,c6:8:4d:2b:4b:5d Lease:0x651a987d}
	I1002 04:19:34.052777   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.14 HWAddress:22:b5:88:68:b3:50 ID:1,22:b5:88:68:b3:50 Lease:0x651be97d}
	I1002 04:19:34.052791   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.13 HWAddress:92:79:a6:ba:9b:af ID:1,92:79:a6:ba:9b:af Lease:0x651be99e}
	I1002 04:19:34.052800   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.12 HWAddress:26:7c:c7:f5:d5:85 ID:1,26:7c:c7:f5:d5:85 Lease:0x651a97f2}
	I1002 04:19:34.052809   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.11 HWAddress:da:69:84:ff:8a:c9 ID:1,da:69:84:ff:8a:c9 Lease:0x651be87c}
	I1002 04:19:34.052817   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.10 HWAddress:ee:b0:43:fa:b6:b5 ID:1,ee:b0:43:fa:b6:b5 Lease:0x651be85c}
	I1002 04:19:34.052826   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.9 HWAddress:b6:64:53:57:2a:86 ID:1,b6:64:53:57:2a:86 Lease:0x651be843}
	I1002 04:19:34.052834   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.8 HWAddress:2a:f4:7:2c:43:de ID:1,2a:f4:7:2c:43:de Lease:0x651be835}
	I1002 04:19:34.052842   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.7 HWAddress:8e:a8:11:c9:a1:e5 ID:1,8e:a8:11:c9:a1:e5 Lease:0x651be820}
	I1002 04:19:34.052849   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.6 HWAddress:96:80:f7:c:df:d8 ID:1,96:80:f7:c:df:d8 Lease:0x651a9696}
	I1002 04:19:34.052857   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.5 HWAddress:16:a0:fc:26:e:40 ID:1,16:a0:fc:26:e:40 Lease:0x651be7e5}
	I1002 04:19:34.052877   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.4 HWAddress:ae:5d:4a:2f:b:74 ID:1,ae:5d:4a:2f:b:74 Lease:0x651be77b}
	I1002 04:19:34.052896   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.3 HWAddress:ae:e6:9d:b3:23:84 ID:1,ae:e6:9d:b3:23:84 Lease:0x651be710}
	I1002 04:19:34.052914   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.2 HWAddress:72:88:6:ff:96:d3 ID:1,72:88:6:ff:96:d3 Lease:0x651be6d8}
	I1002 04:19:34.052928   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.69.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be649}
	I1002 04:19:34.052940   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.15 HWAddress:22:ed:a8:a4:a2:69 ID:1,22:ed:a8:a4:a2:69 Lease:0x651be62b}
	I1002 04:19:34.052951   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.14 HWAddress:1e:b6:d4:aa:d5:7 ID:1,1e:b6:d4:aa:d5:7 Lease:0x651a9440}
	I1002 04:19:34.052959   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.13 HWAddress:5a:91:c6:5:e0:24 ID:1,5a:91:c6:5:e0:24 Lease:0x651be60d}
	I1002 04:19:34.052968   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.12 HWAddress:f6:6d:98:92:1:a9 ID:1,f6:6d:98:92:1:a9 Lease:0x651be5da}
	I1002 04:19:34.052974   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.11 HWAddress:7e:eb:66:8f:41:b3 ID:1,7e:eb:66:8f:41:b3 Lease:0x651a930e}
	I1002 04:19:34.052983   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.10 HWAddress:c2:39:e0:92:22:c6 ID:1,c2:39:e0:92:22:c6 Lease:0x651a92e0}
	I1002 04:19:34.052991   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.9 HWAddress:9a:6:5a:80:5e:aa ID:1,9a:6:5a:80:5e:aa Lease:0x651be419}
	I1002 04:19:34.052999   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.8 HWAddress:e2:fd:a3:90:3:c1 ID:1,e2:fd:a3:90:3:c1 Lease:0x651be3f2}
	I1002 04:19:34.053010   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.7 HWAddress:f6:c5:4d:b6:2d:eb ID:1,f6:c5:4d:b6:2d:eb Lease:0x651be38e}
	I1002 04:19:34.053017   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.6 HWAddress:a:a6:e5:5f:7e:77 ID:1,a:a6:e5:5f:7e:77 Lease:0x651be31e}
	I1002 04:19:34.053032   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.5 HWAddress:b6:3d:e6:50:d:a4 ID:1,b6:3d:e6:50:d:a4 Lease:0x651be2ee}
	I1002 04:19:34.053042   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.4 HWAddress:ee:d4:6c:2:6f:f5 ID:1,ee:d4:6c:2:6f:f5 Lease:0x651be211}
	I1002 04:19:34.053049   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.3 HWAddress:f6:86:f1:2b:db:97 ID:1,f6:86:f1:2b:db:97 Lease:0x651a9086}
	I1002 04:19:34.053058   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.2 HWAddress:f2:d2:31:bc:71:a1 ID:1,f2:d2:31:bc:71:a1 Lease:0x651be0ee}
	I1002 04:19:34.053068   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.67.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be0b9}
	I1002 04:19:34.057844   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1002 04:19:34.066673   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1002 04:19:34.067528   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1002 04:19:34.067547   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1002 04:19:34.067576   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1002 04:19:34.067599   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1002 04:19:34.454885   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1002 04:19:34.454922   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1002 04:19:34.559143   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1002 04:19:34.559172   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1002 04:19:34.559191   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1002 04:19:34.559199   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1002 04:19:34.560006   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1002 04:19:34.560017   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1002 04:19:36.054609   15148 main.go:141] libmachine: (auto-766000) DBG | Attempt 1
	I1002 04:19:36.054626   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:36.054751   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:36.055625   15148 main.go:141] libmachine: (auto-766000) DBG | Searching for ca:c3:3f:1c:55:a in /var/db/dhcpd_leases ...
	I1002 04:19:36.055728   15148 main.go:141] libmachine: (auto-766000) DBG | Found 73 entries in /var/db/dhcpd_leases!
	I1002 04:19:36.055736   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.58 HWAddress:42:37:9a:9a:81:4e ID:1,42:37:9a:9a:81:4e Lease:0x651bf8c2}
	I1002 04:19:36.055756   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.57 HWAddress:8e:d3:8c:17:c0:7d ID:1,8e:d3:8c:17:c0:7d Lease:0x651aa722}
	I1002 04:19:36.055766   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.56 HWAddress:d6:30:42:4e:d1:bb ID:1,d6:30:42:4e:d1:bb Lease:0x651bf86c}
	I1002 04:19:36.055774   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.55 HWAddress:e6:3e:57:75:71:61 ID:1,e6:3e:57:75:71:61 Lease:0x651bf860}
	I1002 04:19:36.055782   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.54 HWAddress:ee:38:8e:f1:af:fd ID:1,ee:38:8e:f1:af:fd Lease:0x651aa6e0}
	I1002 04:19:36.055793   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.53 HWAddress:f2:40:42:2a:9e:b9 ID:1,f2:40:42:2a:9e:b9 Lease:0x651bf741}
	I1002 04:19:36.055801   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.52 HWAddress:1e:1e:85:26:e8:d1 ID:1,1e:1e:85:26:e8:d1 Lease:0x651aa5ac}
	I1002 04:19:36.055810   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.51 HWAddress:82:31:7f:2c:92:61 ID:1,82:31:7f:2c:92:61 Lease:0x651bf703}
	I1002 04:19:36.055819   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.50 HWAddress:92:85:3:c0:9b:8b ID:1,92:85:3:c0:9b:8b Lease:0x651bf6e8}
	I1002 04:19:36.055829   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.49 HWAddress:a6:eb:1:c3:3f:33 ID:1,a6:eb:1:c3:3f:33 Lease:0x651aa578}
	I1002 04:19:36.055854   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.48 HWAddress:b6:d4:ee:80:e3:7c ID:1,b6:d4:ee:80:e3:7c Lease:0x651bf6b7}
	I1002 04:19:36.055869   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.47 HWAddress:3a:ed:29:91:4a:d2 ID:1,3a:ed:29:91:4a:d2 Lease:0x651bf6a2}
	I1002 04:19:36.055877   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.46 HWAddress:ba:a0:4a:72:ba:62 ID:1,ba:a0:4a:72:ba:62 Lease:0x651bf636}
	I1002 04:19:36.055884   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.45 HWAddress:f2:10:90:e6:b6:f7 ID:1,f2:10:90:e6:b6:f7 Lease:0x651bf5ca}
	I1002 04:19:36.055892   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.44 HWAddress:ea:4c:aa:8:e4:9e ID:1,ea:4c:aa:8:e4:9e Lease:0x651bf57b}
	I1002 04:19:36.055901   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.43 HWAddress:e:6a:a3:fe:d2:cb ID:1,e:6a:a3:fe:d2:cb Lease:0x651aa394}
	I1002 04:19:36.055909   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.42 HWAddress:d6:a7:4a:88:4e:ce ID:1,d6:a7:4a:88:4e:ce Lease:0x651aa2da}
	I1002 04:19:36.055916   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.41 HWAddress:42:af:87:39:6e:40 ID:1,42:af:87:39:6e:40 Lease:0x651bf4c2}
	I1002 04:19:36.055924   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.40 HWAddress:be:0:2f:ae:61:a6 ID:1,be:0:2f:ae:61:a6 Lease:0x651bf475}
	I1002 04:19:36.055930   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.39 HWAddress:fa:4e:36:7c:45:59 ID:1,fa:4e:36:7c:45:59 Lease:0x651aa173}
	I1002 04:19:36.055938   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.38 HWAddress:be:31:47:3d:af:ca ID:1,be:31:47:3d:af:ca Lease:0x651aa15d}
	I1002 04:19:36.055946   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.37 HWAddress:ce:fe:4c:b:20:0 ID:1,ce:fe:4c:b:20:0 Lease:0x651bf2b0}
	I1002 04:19:36.055954   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.36 HWAddress:f6:d3:d8:c5:b:4 ID:1,f6:d3:d8:c5:b:4 Lease:0x651bf273}
	I1002 04:19:36.055962   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.35 HWAddress:e2:9b:39:3b:b6:81 ID:1,e2:9b:39:3b:b6:81 Lease:0x651bf1d3}
	I1002 04:19:36.055973   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.34 HWAddress:f2:d6:6e:8c:56:79 ID:1,f2:d6:6e:8c:56:79 Lease:0x651bf1bb}
	I1002 04:19:36.055980   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.33 HWAddress:3a:12:4c:79:5d:43 ID:1,3a:12:4c:79:5d:43 Lease:0x651bf0d4}
	I1002 04:19:36.055989   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.32 HWAddress:de:c1:60:39:14:91 ID:1,de:c1:60:39:14:91 Lease:0x651a9f49}
	I1002 04:19:36.055996   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.31 HWAddress:26:f:15:87:ad:4e ID:1,26:f:15:87:ad:4e Lease:0x651befb8}
	I1002 04:19:36.056005   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.30 HWAddress:6a:0:7f:10:d4:d9 ID:1,6a:0:7f:10:d4:d9 Lease:0x651beeba}
	I1002 04:19:36.056013   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.29 HWAddress:52:d3:be:bc:4f:c2 ID:1,52:d3:be:bc:4f:c2 Lease:0x651bee49}
	I1002 04:19:36.056022   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.28 HWAddress:3a:e8:1f:a6:a4:63 ID:1,3a:e8:1f:a6:a4:63 Lease:0x651bed41}
	I1002 04:19:36.056029   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.27 HWAddress:d2:4e:a:29:75:a7 ID:1,d2:4e:a:29:75:a7 Lease:0x651bebc0}
	I1002 04:19:36.056037   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.26 HWAddress:2a:21:83:2d:61:52 ID:1,2a:21:83:2d:61:52 Lease:0x651bec15}
	I1002 04:19:36.056045   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.25 HWAddress:8a:7d:ad:ea:52:8f ID:1,8a:7d:ad:ea:52:8f Lease:0x651beb1d}
	I1002 04:19:36.056053   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.24 HWAddress:7a:91:f7:be:fd:e3 ID:1,7a:91:f7:be:fd:e3 Lease:0x651beb01}
	I1002 04:19:36.056065   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.23 HWAddress:2e:f4:d7:73:da:57 ID:1,2e:f4:d7:73:da:57 Lease:0x651beac6}
	I1002 04:19:36.056073   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.22 HWAddress:e2:e6:83:39:ae:b1 ID:1,e2:e6:83:39:ae:b1 Lease:0x651beaa8}
	I1002 04:19:36.056081   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.21 HWAddress:ce:bf:5f:b1:ac:25 ID:1,ce:bf:5f:b1:ac:25 Lease:0x651bea97}
	I1002 04:19:36.056091   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.20 HWAddress:ca:d:2a:ac:b1:6 ID:1,ca:d:2a:ac:b1:6 Lease:0x651bea88}
	I1002 04:19:36.056099   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.19 HWAddress:52:f5:58:b1:ed:72 ID:1,52:f5:58:b1:ed:72 Lease:0x651bea3a}
	I1002 04:19:36.056108   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.18 HWAddress:12:d5:a9:d3:2d:62 ID:1,12:d5:a9:d3:2d:62 Lease:0x651bea2e}
	I1002 04:19:36.056115   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.17 HWAddress:8e:72:e7:d4:b0:8b ID:1,8e:72:e7:d4:b0:8b Lease:0x651a98a3}
	I1002 04:19:36.056125   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.16 HWAddress:f6:1c:a1:3f:3a:af ID:1,f6:1c:a1:3f:3a:af Lease:0x651be9e8}
	I1002 04:19:36.056132   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.15 HWAddress:c6:8:4d:2b:4b:5d ID:1,c6:8:4d:2b:4b:5d Lease:0x651a987d}
	I1002 04:19:36.056141   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.14 HWAddress:22:b5:88:68:b3:50 ID:1,22:b5:88:68:b3:50 Lease:0x651be97d}
	I1002 04:19:36.056148   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.13 HWAddress:92:79:a6:ba:9b:af ID:1,92:79:a6:ba:9b:af Lease:0x651be99e}
	I1002 04:19:36.056157   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.12 HWAddress:26:7c:c7:f5:d5:85 ID:1,26:7c:c7:f5:d5:85 Lease:0x651a97f2}
	I1002 04:19:36.056165   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.11 HWAddress:da:69:84:ff:8a:c9 ID:1,da:69:84:ff:8a:c9 Lease:0x651be87c}
	I1002 04:19:36.056173   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.10 HWAddress:ee:b0:43:fa:b6:b5 ID:1,ee:b0:43:fa:b6:b5 Lease:0x651be85c}
	I1002 04:19:36.056181   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.9 HWAddress:b6:64:53:57:2a:86 ID:1,b6:64:53:57:2a:86 Lease:0x651be843}
	I1002 04:19:36.056191   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.8 HWAddress:2a:f4:7:2c:43:de ID:1,2a:f4:7:2c:43:de Lease:0x651be835}
	I1002 04:19:36.056199   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.7 HWAddress:8e:a8:11:c9:a1:e5 ID:1,8e:a8:11:c9:a1:e5 Lease:0x651be820}
	I1002 04:19:36.056209   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.6 HWAddress:96:80:f7:c:df:d8 ID:1,96:80:f7:c:df:d8 Lease:0x651a9696}
	I1002 04:19:36.056216   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.5 HWAddress:16:a0:fc:26:e:40 ID:1,16:a0:fc:26:e:40 Lease:0x651be7e5}
	I1002 04:19:36.056225   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.4 HWAddress:ae:5d:4a:2f:b:74 ID:1,ae:5d:4a:2f:b:74 Lease:0x651be77b}
	I1002 04:19:36.056232   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.3 HWAddress:ae:e6:9d:b3:23:84 ID:1,ae:e6:9d:b3:23:84 Lease:0x651be710}
	I1002 04:19:36.056241   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.2 HWAddress:72:88:6:ff:96:d3 ID:1,72:88:6:ff:96:d3 Lease:0x651be6d8}
	I1002 04:19:36.056249   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.69.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be649}
	I1002 04:19:36.056257   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.15 HWAddress:22:ed:a8:a4:a2:69 ID:1,22:ed:a8:a4:a2:69 Lease:0x651be62b}
	I1002 04:19:36.056265   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.14 HWAddress:1e:b6:d4:aa:d5:7 ID:1,1e:b6:d4:aa:d5:7 Lease:0x651a9440}
	I1002 04:19:36.056276   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.13 HWAddress:5a:91:c6:5:e0:24 ID:1,5a:91:c6:5:e0:24 Lease:0x651be60d}
	I1002 04:19:36.056284   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.12 HWAddress:f6:6d:98:92:1:a9 ID:1,f6:6d:98:92:1:a9 Lease:0x651be5da}
	I1002 04:19:36.056292   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.11 HWAddress:7e:eb:66:8f:41:b3 ID:1,7e:eb:66:8f:41:b3 Lease:0x651a930e}
	I1002 04:19:36.056304   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.10 HWAddress:c2:39:e0:92:22:c6 ID:1,c2:39:e0:92:22:c6 Lease:0x651a92e0}
	I1002 04:19:36.056313   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.9 HWAddress:9a:6:5a:80:5e:aa ID:1,9a:6:5a:80:5e:aa Lease:0x651be419}
	I1002 04:19:36.056320   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.8 HWAddress:e2:fd:a3:90:3:c1 ID:1,e2:fd:a3:90:3:c1 Lease:0x651be3f2}
	I1002 04:19:36.056332   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.7 HWAddress:f6:c5:4d:b6:2d:eb ID:1,f6:c5:4d:b6:2d:eb Lease:0x651be38e}
	I1002 04:19:36.056342   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.6 HWAddress:a:a6:e5:5f:7e:77 ID:1,a:a6:e5:5f:7e:77 Lease:0x651be31e}
	I1002 04:19:36.056350   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.5 HWAddress:b6:3d:e6:50:d:a4 ID:1,b6:3d:e6:50:d:a4 Lease:0x651be2ee}
	I1002 04:19:36.056357   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.4 HWAddress:ee:d4:6c:2:6f:f5 ID:1,ee:d4:6c:2:6f:f5 Lease:0x651be211}
	I1002 04:19:36.056372   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.3 HWAddress:f6:86:f1:2b:db:97 ID:1,f6:86:f1:2b:db:97 Lease:0x651a9086}
	I1002 04:19:36.056383   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.2 HWAddress:f2:d2:31:bc:71:a1 ID:1,f2:d2:31:bc:71:a1 Lease:0x651be0ee}
	I1002 04:19:36.056392   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.67.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be0b9}
	I1002 04:19:38.056297   15148 main.go:141] libmachine: (auto-766000) DBG | Attempt 2
	I1002 04:19:38.056312   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:38.056361   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:38.057528   15148 main.go:141] libmachine: (auto-766000) DBG | Searching for ca:c3:3f:1c:55:a in /var/db/dhcpd_leases ...
	I1002 04:19:38.057708   15148 main.go:141] libmachine: (auto-766000) DBG | Found 73 entries in /var/db/dhcpd_leases!
	I1002 04:19:38.057721   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.58 HWAddress:42:37:9a:9a:81:4e ID:1,42:37:9a:9a:81:4e Lease:0x651aa748}
	I1002 04:19:38.057740   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.57 HWAddress:8e:d3:8c:17:c0:7d ID:1,8e:d3:8c:17:c0:7d Lease:0x651aa722}
	I1002 04:19:38.057751   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.56 HWAddress:d6:30:42:4e:d1:bb ID:1,d6:30:42:4e:d1:bb Lease:0x651bf86c}
	I1002 04:19:38.057779   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.55 HWAddress:e6:3e:57:75:71:61 ID:1,e6:3e:57:75:71:61 Lease:0x651bf860}
	I1002 04:19:38.057795   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.54 HWAddress:ee:38:8e:f1:af:fd ID:1,ee:38:8e:f1:af:fd Lease:0x651aa6e0}
	I1002 04:19:38.057812   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.53 HWAddress:f2:40:42:2a:9e:b9 ID:1,f2:40:42:2a:9e:b9 Lease:0x651bf741}
	I1002 04:19:38.057824   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.52 HWAddress:1e:1e:85:26:e8:d1 ID:1,1e:1e:85:26:e8:d1 Lease:0x651aa5ac}
	I1002 04:19:38.057833   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.51 HWAddress:82:31:7f:2c:92:61 ID:1,82:31:7f:2c:92:61 Lease:0x651bf703}
	I1002 04:19:38.057840   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.50 HWAddress:92:85:3:c0:9b:8b ID:1,92:85:3:c0:9b:8b Lease:0x651bf6e8}
	I1002 04:19:38.057852   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.49 HWAddress:a6:eb:1:c3:3f:33 ID:1,a6:eb:1:c3:3f:33 Lease:0x651aa578}
	I1002 04:19:38.057860   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.48 HWAddress:b6:d4:ee:80:e3:7c ID:1,b6:d4:ee:80:e3:7c Lease:0x651bf6b7}
	I1002 04:19:38.057867   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.47 HWAddress:3a:ed:29:91:4a:d2 ID:1,3a:ed:29:91:4a:d2 Lease:0x651bf6a2}
	I1002 04:19:38.057873   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.46 HWAddress:ba:a0:4a:72:ba:62 ID:1,ba:a0:4a:72:ba:62 Lease:0x651bf636}
	I1002 04:19:38.057881   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.45 HWAddress:f2:10:90:e6:b6:f7 ID:1,f2:10:90:e6:b6:f7 Lease:0x651bf5ca}
	I1002 04:19:38.057891   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.44 HWAddress:ea:4c:aa:8:e4:9e ID:1,ea:4c:aa:8:e4:9e Lease:0x651bf57b}
	I1002 04:19:38.057903   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.43 HWAddress:e:6a:a3:fe:d2:cb ID:1,e:6a:a3:fe:d2:cb Lease:0x651aa394}
	I1002 04:19:38.057913   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.42 HWAddress:d6:a7:4a:88:4e:ce ID:1,d6:a7:4a:88:4e:ce Lease:0x651aa2da}
	I1002 04:19:38.057930   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.41 HWAddress:42:af:87:39:6e:40 ID:1,42:af:87:39:6e:40 Lease:0x651bf4c2}
	I1002 04:19:38.057939   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.40 HWAddress:be:0:2f:ae:61:a6 ID:1,be:0:2f:ae:61:a6 Lease:0x651bf475}
	I1002 04:19:38.057992   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.39 HWAddress:fa:4e:36:7c:45:59 ID:1,fa:4e:36:7c:45:59 Lease:0x651aa173}
	I1002 04:19:38.058009   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.38 HWAddress:be:31:47:3d:af:ca ID:1,be:31:47:3d:af:ca Lease:0x651aa15d}
	I1002 04:19:38.058032   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.37 HWAddress:ce:fe:4c:b:20:0 ID:1,ce:fe:4c:b:20:0 Lease:0x651bf2b0}
	I1002 04:19:38.058062   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.36 HWAddress:f6:d3:d8:c5:b:4 ID:1,f6:d3:d8:c5:b:4 Lease:0x651bf273}
	I1002 04:19:38.058069   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.35 HWAddress:e2:9b:39:3b:b6:81 ID:1,e2:9b:39:3b:b6:81 Lease:0x651bf1d3}
	I1002 04:19:38.058077   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.34 HWAddress:f2:d6:6e:8c:56:79 ID:1,f2:d6:6e:8c:56:79 Lease:0x651bf1bb}
	I1002 04:19:38.058087   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.33 HWAddress:3a:12:4c:79:5d:43 ID:1,3a:12:4c:79:5d:43 Lease:0x651bf0d4}
	I1002 04:19:38.058096   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.32 HWAddress:de:c1:60:39:14:91 ID:1,de:c1:60:39:14:91 Lease:0x651a9f49}
	I1002 04:19:38.058104   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.31 HWAddress:26:f:15:87:ad:4e ID:1,26:f:15:87:ad:4e Lease:0x651befb8}
	I1002 04:19:38.058110   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.30 HWAddress:6a:0:7f:10:d4:d9 ID:1,6a:0:7f:10:d4:d9 Lease:0x651beeba}
	I1002 04:19:38.058135   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.29 HWAddress:52:d3:be:bc:4f:c2 ID:1,52:d3:be:bc:4f:c2 Lease:0x651bee49}
	I1002 04:19:38.058164   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.28 HWAddress:3a:e8:1f:a6:a4:63 ID:1,3a:e8:1f:a6:a4:63 Lease:0x651bed41}
	I1002 04:19:38.058171   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.27 HWAddress:d2:4e:a:29:75:a7 ID:1,d2:4e:a:29:75:a7 Lease:0x651bebc0}
	I1002 04:19:38.058178   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.26 HWAddress:2a:21:83:2d:61:52 ID:1,2a:21:83:2d:61:52 Lease:0x651bec15}
	I1002 04:19:38.058193   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.25 HWAddress:8a:7d:ad:ea:52:8f ID:1,8a:7d:ad:ea:52:8f Lease:0x651beb1d}
	I1002 04:19:38.058221   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.24 HWAddress:7a:91:f7:be:fd:e3 ID:1,7a:91:f7:be:fd:e3 Lease:0x651beb01}
	I1002 04:19:38.058229   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.23 HWAddress:2e:f4:d7:73:da:57 ID:1,2e:f4:d7:73:da:57 Lease:0x651beac6}
	I1002 04:19:38.058257   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.22 HWAddress:e2:e6:83:39:ae:b1 ID:1,e2:e6:83:39:ae:b1 Lease:0x651beaa8}
	I1002 04:19:38.058265   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.21 HWAddress:ce:bf:5f:b1:ac:25 ID:1,ce:bf:5f:b1:ac:25 Lease:0x651bea97}
	I1002 04:19:38.058273   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.20 HWAddress:ca:d:2a:ac:b1:6 ID:1,ca:d:2a:ac:b1:6 Lease:0x651bea88}
	I1002 04:19:38.058329   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.19 HWAddress:52:f5:58:b1:ed:72 ID:1,52:f5:58:b1:ed:72 Lease:0x651bea3a}
	I1002 04:19:38.058339   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.18 HWAddress:12:d5:a9:d3:2d:62 ID:1,12:d5:a9:d3:2d:62 Lease:0x651bea2e}
	I1002 04:19:38.058363   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.17 HWAddress:8e:72:e7:d4:b0:8b ID:1,8e:72:e7:d4:b0:8b Lease:0x651a98a3}
	I1002 04:19:38.058374   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.16 HWAddress:f6:1c:a1:3f:3a:af ID:1,f6:1c:a1:3f:3a:af Lease:0x651be9e8}
	I1002 04:19:38.058382   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.15 HWAddress:c6:8:4d:2b:4b:5d ID:1,c6:8:4d:2b:4b:5d Lease:0x651a987d}
	I1002 04:19:38.058392   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.14 HWAddress:22:b5:88:68:b3:50 ID:1,22:b5:88:68:b3:50 Lease:0x651be97d}
	I1002 04:19:38.058399   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.13 HWAddress:92:79:a6:ba:9b:af ID:1,92:79:a6:ba:9b:af Lease:0x651be99e}
	I1002 04:19:38.058407   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.12 HWAddress:26:7c:c7:f5:d5:85 ID:1,26:7c:c7:f5:d5:85 Lease:0x651a97f2}
	I1002 04:19:38.058415   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.11 HWAddress:da:69:84:ff:8a:c9 ID:1,da:69:84:ff:8a:c9 Lease:0x651be87c}
	I1002 04:19:38.058424   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.10 HWAddress:ee:b0:43:fa:b6:b5 ID:1,ee:b0:43:fa:b6:b5 Lease:0x651be85c}
	I1002 04:19:38.058439   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.9 HWAddress:b6:64:53:57:2a:86 ID:1,b6:64:53:57:2a:86 Lease:0x651be843}
	I1002 04:19:38.058448   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.8 HWAddress:2a:f4:7:2c:43:de ID:1,2a:f4:7:2c:43:de Lease:0x651be835}
	I1002 04:19:38.058456   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.7 HWAddress:8e:a8:11:c9:a1:e5 ID:1,8e:a8:11:c9:a1:e5 Lease:0x651be820}
	I1002 04:19:38.058463   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.6 HWAddress:96:80:f7:c:df:d8 ID:1,96:80:f7:c:df:d8 Lease:0x651a9696}
	I1002 04:19:38.058471   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.5 HWAddress:16:a0:fc:26:e:40 ID:1,16:a0:fc:26:e:40 Lease:0x651be7e5}
	I1002 04:19:38.058477   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.4 HWAddress:ae:5d:4a:2f:b:74 ID:1,ae:5d:4a:2f:b:74 Lease:0x651be77b}
	I1002 04:19:38.058485   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.3 HWAddress:ae:e6:9d:b3:23:84 ID:1,ae:e6:9d:b3:23:84 Lease:0x651be710}
	I1002 04:19:38.058493   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.2 HWAddress:72:88:6:ff:96:d3 ID:1,72:88:6:ff:96:d3 Lease:0x651be6d8}
	I1002 04:19:38.058504   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.69.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be649}
	I1002 04:19:38.058512   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.15 HWAddress:22:ed:a8:a4:a2:69 ID:1,22:ed:a8:a4:a2:69 Lease:0x651be62b}
	I1002 04:19:38.058540   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.14 HWAddress:1e:b6:d4:aa:d5:7 ID:1,1e:b6:d4:aa:d5:7 Lease:0x651a9440}
	I1002 04:19:38.058572   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.13 HWAddress:5a:91:c6:5:e0:24 ID:1,5a:91:c6:5:e0:24 Lease:0x651be60d}
	I1002 04:19:38.058590   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.12 HWAddress:f6:6d:98:92:1:a9 ID:1,f6:6d:98:92:1:a9 Lease:0x651be5da}
	I1002 04:19:38.058602   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.11 HWAddress:7e:eb:66:8f:41:b3 ID:1,7e:eb:66:8f:41:b3 Lease:0x651a930e}
	I1002 04:19:38.058611   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.10 HWAddress:c2:39:e0:92:22:c6 ID:1,c2:39:e0:92:22:c6 Lease:0x651a92e0}
	I1002 04:19:38.058635   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.9 HWAddress:9a:6:5a:80:5e:aa ID:1,9a:6:5a:80:5e:aa Lease:0x651be419}
	I1002 04:19:38.058664   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.8 HWAddress:e2:fd:a3:90:3:c1 ID:1,e2:fd:a3:90:3:c1 Lease:0x651be3f2}
	I1002 04:19:38.058679   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.7 HWAddress:f6:c5:4d:b6:2d:eb ID:1,f6:c5:4d:b6:2d:eb Lease:0x651be38e}
	I1002 04:19:38.058688   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.6 HWAddress:a:a6:e5:5f:7e:77 ID:1,a:a6:e5:5f:7e:77 Lease:0x651be31e}
	I1002 04:19:38.058712   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.5 HWAddress:b6:3d:e6:50:d:a4 ID:1,b6:3d:e6:50:d:a4 Lease:0x651be2ee}
	I1002 04:19:38.058719   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.4 HWAddress:ee:d4:6c:2:6f:f5 ID:1,ee:d4:6c:2:6f:f5 Lease:0x651be211}
	I1002 04:19:38.058750   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.3 HWAddress:f6:86:f1:2b:db:97 ID:1,f6:86:f1:2b:db:97 Lease:0x651a9086}
	I1002 04:19:38.058756   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.2 HWAddress:f2:d2:31:bc:71:a1 ID:1,f2:d2:31:bc:71:a1 Lease:0x651be0ee}
	I1002 04:19:38.058776   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.67.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be0b9}
	I1002 04:19:39.511226   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1002 04:19:39.511313   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1002 04:19:39.511323   15148 main.go:141] libmachine: (auto-766000) DBG | 2023/10/02 04:19:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1002 04:19:40.059302   15148 main.go:141] libmachine: (auto-766000) DBG | Attempt 3
	I1002 04:19:40.059317   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:40.059412   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:40.060328   15148 main.go:141] libmachine: (auto-766000) DBG | Searching for ca:c3:3f:1c:55:a in /var/db/dhcpd_leases ...
	I1002 04:19:40.060446   15148 main.go:141] libmachine: (auto-766000) DBG | Found 73 entries in /var/db/dhcpd_leases!
	I1002 04:19:40.060456   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.58 HWAddress:42:37:9a:9a:81:4e ID:1,42:37:9a:9a:81:4e Lease:0x651aa748}
	I1002 04:19:40.060465   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.57 HWAddress:8e:d3:8c:17:c0:7d ID:1,8e:d3:8c:17:c0:7d Lease:0x651aa722}
	I1002 04:19:40.060476   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.56 HWAddress:d6:30:42:4e:d1:bb ID:1,d6:30:42:4e:d1:bb Lease:0x651bf86c}
	I1002 04:19:40.060491   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.55 HWAddress:e6:3e:57:75:71:61 ID:1,e6:3e:57:75:71:61 Lease:0x651bf860}
	I1002 04:19:40.060499   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.54 HWAddress:ee:38:8e:f1:af:fd ID:1,ee:38:8e:f1:af:fd Lease:0x651aa6e0}
	I1002 04:19:40.060509   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.53 HWAddress:f2:40:42:2a:9e:b9 ID:1,f2:40:42:2a:9e:b9 Lease:0x651bf741}
	I1002 04:19:40.060518   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.52 HWAddress:1e:1e:85:26:e8:d1 ID:1,1e:1e:85:26:e8:d1 Lease:0x651aa5ac}
	I1002 04:19:40.060543   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.51 HWAddress:82:31:7f:2c:92:61 ID:1,82:31:7f:2c:92:61 Lease:0x651bf703}
	I1002 04:19:40.060554   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.50 HWAddress:92:85:3:c0:9b:8b ID:1,92:85:3:c0:9b:8b Lease:0x651bf6e8}
	I1002 04:19:40.060561   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.49 HWAddress:a6:eb:1:c3:3f:33 ID:1,a6:eb:1:c3:3f:33 Lease:0x651aa578}
	I1002 04:19:40.060569   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.48 HWAddress:b6:d4:ee:80:e3:7c ID:1,b6:d4:ee:80:e3:7c Lease:0x651bf6b7}
	I1002 04:19:40.060576   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.47 HWAddress:3a:ed:29:91:4a:d2 ID:1,3a:ed:29:91:4a:d2 Lease:0x651bf6a2}
	I1002 04:19:40.060584   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.46 HWAddress:ba:a0:4a:72:ba:62 ID:1,ba:a0:4a:72:ba:62 Lease:0x651bf636}
	I1002 04:19:40.060591   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.45 HWAddress:f2:10:90:e6:b6:f7 ID:1,f2:10:90:e6:b6:f7 Lease:0x651bf5ca}
	I1002 04:19:40.060599   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.44 HWAddress:ea:4c:aa:8:e4:9e ID:1,ea:4c:aa:8:e4:9e Lease:0x651bf57b}
	I1002 04:19:40.060606   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.43 HWAddress:e:6a:a3:fe:d2:cb ID:1,e:6a:a3:fe:d2:cb Lease:0x651aa394}
	I1002 04:19:40.060613   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.42 HWAddress:d6:a7:4a:88:4e:ce ID:1,d6:a7:4a:88:4e:ce Lease:0x651aa2da}
	I1002 04:19:40.060620   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.41 HWAddress:42:af:87:39:6e:40 ID:1,42:af:87:39:6e:40 Lease:0x651bf4c2}
	I1002 04:19:40.060627   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.40 HWAddress:be:0:2f:ae:61:a6 ID:1,be:0:2f:ae:61:a6 Lease:0x651bf475}
	I1002 04:19:40.060640   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.39 HWAddress:fa:4e:36:7c:45:59 ID:1,fa:4e:36:7c:45:59 Lease:0x651aa173}
	I1002 04:19:40.060647   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.38 HWAddress:be:31:47:3d:af:ca ID:1,be:31:47:3d:af:ca Lease:0x651aa15d}
	I1002 04:19:40.060654   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.37 HWAddress:ce:fe:4c:b:20:0 ID:1,ce:fe:4c:b:20:0 Lease:0x651bf2b0}
	I1002 04:19:40.060661   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.36 HWAddress:f6:d3:d8:c5:b:4 ID:1,f6:d3:d8:c5:b:4 Lease:0x651bf273}
	I1002 04:19:40.060668   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.35 HWAddress:e2:9b:39:3b:b6:81 ID:1,e2:9b:39:3b:b6:81 Lease:0x651bf1d3}
	I1002 04:19:40.060676   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.34 HWAddress:f2:d6:6e:8c:56:79 ID:1,f2:d6:6e:8c:56:79 Lease:0x651bf1bb}
	I1002 04:19:40.060683   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.33 HWAddress:3a:12:4c:79:5d:43 ID:1,3a:12:4c:79:5d:43 Lease:0x651bf0d4}
	I1002 04:19:40.060691   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.32 HWAddress:de:c1:60:39:14:91 ID:1,de:c1:60:39:14:91 Lease:0x651a9f49}
	I1002 04:19:40.060698   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.31 HWAddress:26:f:15:87:ad:4e ID:1,26:f:15:87:ad:4e Lease:0x651befb8}
	I1002 04:19:40.060708   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.30 HWAddress:6a:0:7f:10:d4:d9 ID:1,6a:0:7f:10:d4:d9 Lease:0x651beeba}
	I1002 04:19:40.060716   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.29 HWAddress:52:d3:be:bc:4f:c2 ID:1,52:d3:be:bc:4f:c2 Lease:0x651bee49}
	I1002 04:19:40.060724   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.28 HWAddress:3a:e8:1f:a6:a4:63 ID:1,3a:e8:1f:a6:a4:63 Lease:0x651bed41}
	I1002 04:19:40.060744   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.27 HWAddress:d2:4e:a:29:75:a7 ID:1,d2:4e:a:29:75:a7 Lease:0x651bebc0}
	I1002 04:19:40.060757   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.26 HWAddress:2a:21:83:2d:61:52 ID:1,2a:21:83:2d:61:52 Lease:0x651bec15}
	I1002 04:19:40.060766   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.25 HWAddress:8a:7d:ad:ea:52:8f ID:1,8a:7d:ad:ea:52:8f Lease:0x651beb1d}
	I1002 04:19:40.060775   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.24 HWAddress:7a:91:f7:be:fd:e3 ID:1,7a:91:f7:be:fd:e3 Lease:0x651beb01}
	I1002 04:19:40.060789   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.23 HWAddress:2e:f4:d7:73:da:57 ID:1,2e:f4:d7:73:da:57 Lease:0x651beac6}
	I1002 04:19:40.060803   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.22 HWAddress:e2:e6:83:39:ae:b1 ID:1,e2:e6:83:39:ae:b1 Lease:0x651beaa8}
	I1002 04:19:40.060812   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.21 HWAddress:ce:bf:5f:b1:ac:25 ID:1,ce:bf:5f:b1:ac:25 Lease:0x651bea97}
	I1002 04:19:40.060819   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.20 HWAddress:ca:d:2a:ac:b1:6 ID:1,ca:d:2a:ac:b1:6 Lease:0x651bea88}
	I1002 04:19:40.060827   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.19 HWAddress:52:f5:58:b1:ed:72 ID:1,52:f5:58:b1:ed:72 Lease:0x651bea3a}
	I1002 04:19:40.060835   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.18 HWAddress:12:d5:a9:d3:2d:62 ID:1,12:d5:a9:d3:2d:62 Lease:0x651bea2e}
	I1002 04:19:40.060850   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.17 HWAddress:8e:72:e7:d4:b0:8b ID:1,8e:72:e7:d4:b0:8b Lease:0x651a98a3}
	I1002 04:19:40.060869   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.16 HWAddress:f6:1c:a1:3f:3a:af ID:1,f6:1c:a1:3f:3a:af Lease:0x651be9e8}
	I1002 04:19:40.060880   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.15 HWAddress:c6:8:4d:2b:4b:5d ID:1,c6:8:4d:2b:4b:5d Lease:0x651a987d}
	I1002 04:19:40.060889   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.14 HWAddress:22:b5:88:68:b3:50 ID:1,22:b5:88:68:b3:50 Lease:0x651be97d}
	I1002 04:19:40.060897   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.13 HWAddress:92:79:a6:ba:9b:af ID:1,92:79:a6:ba:9b:af Lease:0x651be99e}
	I1002 04:19:40.060905   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.12 HWAddress:26:7c:c7:f5:d5:85 ID:1,26:7c:c7:f5:d5:85 Lease:0x651a97f2}
	I1002 04:19:40.060921   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.11 HWAddress:da:69:84:ff:8a:c9 ID:1,da:69:84:ff:8a:c9 Lease:0x651be87c}
	I1002 04:19:40.060935   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.10 HWAddress:ee:b0:43:fa:b6:b5 ID:1,ee:b0:43:fa:b6:b5 Lease:0x651be85c}
	I1002 04:19:40.060943   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.9 HWAddress:b6:64:53:57:2a:86 ID:1,b6:64:53:57:2a:86 Lease:0x651be843}
	I1002 04:19:40.060952   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.8 HWAddress:2a:f4:7:2c:43:de ID:1,2a:f4:7:2c:43:de Lease:0x651be835}
	I1002 04:19:40.060960   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.7 HWAddress:8e:a8:11:c9:a1:e5 ID:1,8e:a8:11:c9:a1:e5 Lease:0x651be820}
	I1002 04:19:40.060967   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.6 HWAddress:96:80:f7:c:df:d8 ID:1,96:80:f7:c:df:d8 Lease:0x651a9696}
	I1002 04:19:40.060975   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.5 HWAddress:16:a0:fc:26:e:40 ID:1,16:a0:fc:26:e:40 Lease:0x651be7e5}
	I1002 04:19:40.060988   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.4 HWAddress:ae:5d:4a:2f:b:74 ID:1,ae:5d:4a:2f:b:74 Lease:0x651be77b}
	I1002 04:19:40.060998   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.3 HWAddress:ae:e6:9d:b3:23:84 ID:1,ae:e6:9d:b3:23:84 Lease:0x651be710}
	I1002 04:19:40.061005   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.2 HWAddress:72:88:6:ff:96:d3 ID:1,72:88:6:ff:96:d3 Lease:0x651be6d8}
	I1002 04:19:40.061014   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.69.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be649}
	I1002 04:19:40.061033   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.15 HWAddress:22:ed:a8:a4:a2:69 ID:1,22:ed:a8:a4:a2:69 Lease:0x651be62b}
	I1002 04:19:40.061044   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.14 HWAddress:1e:b6:d4:aa:d5:7 ID:1,1e:b6:d4:aa:d5:7 Lease:0x651a9440}
	I1002 04:19:40.061056   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.13 HWAddress:5a:91:c6:5:e0:24 ID:1,5a:91:c6:5:e0:24 Lease:0x651be60d}
	I1002 04:19:40.061065   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.12 HWAddress:f6:6d:98:92:1:a9 ID:1,f6:6d:98:92:1:a9 Lease:0x651be5da}
	I1002 04:19:40.061072   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.11 HWAddress:7e:eb:66:8f:41:b3 ID:1,7e:eb:66:8f:41:b3 Lease:0x651a930e}
	I1002 04:19:40.061081   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.10 HWAddress:c2:39:e0:92:22:c6 ID:1,c2:39:e0:92:22:c6 Lease:0x651a92e0}
	I1002 04:19:40.061089   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.9 HWAddress:9a:6:5a:80:5e:aa ID:1,9a:6:5a:80:5e:aa Lease:0x651be419}
	I1002 04:19:40.061102   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.8 HWAddress:e2:fd:a3:90:3:c1 ID:1,e2:fd:a3:90:3:c1 Lease:0x651be3f2}
	I1002 04:19:40.061111   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.7 HWAddress:f6:c5:4d:b6:2d:eb ID:1,f6:c5:4d:b6:2d:eb Lease:0x651be38e}
	I1002 04:19:40.061120   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.6 HWAddress:a:a6:e5:5f:7e:77 ID:1,a:a6:e5:5f:7e:77 Lease:0x651be31e}
	I1002 04:19:40.061128   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.5 HWAddress:b6:3d:e6:50:d:a4 ID:1,b6:3d:e6:50:d:a4 Lease:0x651be2ee}
	I1002 04:19:40.061136   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.4 HWAddress:ee:d4:6c:2:6f:f5 ID:1,ee:d4:6c:2:6f:f5 Lease:0x651be211}
	I1002 04:19:40.061143   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.3 HWAddress:f6:86:f1:2b:db:97 ID:1,f6:86:f1:2b:db:97 Lease:0x651a9086}
	I1002 04:19:40.061152   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.2 HWAddress:f2:d2:31:bc:71:a1 ID:1,f2:d2:31:bc:71:a1 Lease:0x651be0ee}
	I1002 04:19:40.061160   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.67.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be0b9}
	I1002 04:19:42.060994   15148 main.go:141] libmachine: (auto-766000) DBG | Attempt 4
	I1002 04:19:42.061010   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:42.061102   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:42.062192   15148 main.go:141] libmachine: (auto-766000) DBG | Searching for ca:c3:3f:1c:55:a in /var/db/dhcpd_leases ...
	I1002 04:19:42.062350   15148 main.go:141] libmachine: (auto-766000) DBG | Found 73 entries in /var/db/dhcpd_leases!
	I1002 04:19:42.062361   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.58 HWAddress:42:37:9a:9a:81:4e ID:1,42:37:9a:9a:81:4e Lease:0x651aa748}
	I1002 04:19:42.062369   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.57 HWAddress:8e:d3:8c:17:c0:7d ID:1,8e:d3:8c:17:c0:7d Lease:0x651aa722}
	I1002 04:19:42.062376   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.56 HWAddress:d6:30:42:4e:d1:bb ID:1,d6:30:42:4e:d1:bb Lease:0x651bf86c}
	I1002 04:19:42.062385   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.55 HWAddress:e6:3e:57:75:71:61 ID:1,e6:3e:57:75:71:61 Lease:0x651bf860}
	I1002 04:19:42.062392   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.54 HWAddress:ee:38:8e:f1:af:fd ID:1,ee:38:8e:f1:af:fd Lease:0x651aa6e0}
	I1002 04:19:42.062400   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.53 HWAddress:f2:40:42:2a:9e:b9 ID:1,f2:40:42:2a:9e:b9 Lease:0x651bf741}
	I1002 04:19:42.062407   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.52 HWAddress:1e:1e:85:26:e8:d1 ID:1,1e:1e:85:26:e8:d1 Lease:0x651aa5ac}
	I1002 04:19:42.062417   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.51 HWAddress:82:31:7f:2c:92:61 ID:1,82:31:7f:2c:92:61 Lease:0x651bf703}
	I1002 04:19:42.062424   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.50 HWAddress:92:85:3:c0:9b:8b ID:1,92:85:3:c0:9b:8b Lease:0x651bf6e8}
	I1002 04:19:42.062440   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.49 HWAddress:a6:eb:1:c3:3f:33 ID:1,a6:eb:1:c3:3f:33 Lease:0x651aa578}
	I1002 04:19:42.062447   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.48 HWAddress:b6:d4:ee:80:e3:7c ID:1,b6:d4:ee:80:e3:7c Lease:0x651bf6b7}
	I1002 04:19:42.062455   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.47 HWAddress:3a:ed:29:91:4a:d2 ID:1,3a:ed:29:91:4a:d2 Lease:0x651bf6a2}
	I1002 04:19:42.062464   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.46 HWAddress:ba:a0:4a:72:ba:62 ID:1,ba:a0:4a:72:ba:62 Lease:0x651bf636}
	I1002 04:19:42.062471   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.45 HWAddress:f2:10:90:e6:b6:f7 ID:1,f2:10:90:e6:b6:f7 Lease:0x651bf5ca}
	I1002 04:19:42.062491   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.44 HWAddress:ea:4c:aa:8:e4:9e ID:1,ea:4c:aa:8:e4:9e Lease:0x651bf57b}
	I1002 04:19:42.062518   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.43 HWAddress:e:6a:a3:fe:d2:cb ID:1,e:6a:a3:fe:d2:cb Lease:0x651aa394}
	I1002 04:19:42.062568   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.42 HWAddress:d6:a7:4a:88:4e:ce ID:1,d6:a7:4a:88:4e:ce Lease:0x651aa2da}
	I1002 04:19:42.062581   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.41 HWAddress:42:af:87:39:6e:40 ID:1,42:af:87:39:6e:40 Lease:0x651bf4c2}
	I1002 04:19:42.062593   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.40 HWAddress:be:0:2f:ae:61:a6 ID:1,be:0:2f:ae:61:a6 Lease:0x651bf475}
	I1002 04:19:42.062602   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.39 HWAddress:fa:4e:36:7c:45:59 ID:1,fa:4e:36:7c:45:59 Lease:0x651aa173}
	I1002 04:19:42.062610   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.38 HWAddress:be:31:47:3d:af:ca ID:1,be:31:47:3d:af:ca Lease:0x651aa15d}
	I1002 04:19:42.062621   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.37 HWAddress:ce:fe:4c:b:20:0 ID:1,ce:fe:4c:b:20:0 Lease:0x651bf2b0}
	I1002 04:19:42.062629   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.36 HWAddress:f6:d3:d8:c5:b:4 ID:1,f6:d3:d8:c5:b:4 Lease:0x651bf273}
	I1002 04:19:42.062638   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.35 HWAddress:e2:9b:39:3b:b6:81 ID:1,e2:9b:39:3b:b6:81 Lease:0x651bf1d3}
	I1002 04:19:42.062645   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.34 HWAddress:f2:d6:6e:8c:56:79 ID:1,f2:d6:6e:8c:56:79 Lease:0x651bf1bb}
	I1002 04:19:42.062668   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.33 HWAddress:3a:12:4c:79:5d:43 ID:1,3a:12:4c:79:5d:43 Lease:0x651bf0d4}
	I1002 04:19:42.062700   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.32 HWAddress:de:c1:60:39:14:91 ID:1,de:c1:60:39:14:91 Lease:0x651a9f49}
	I1002 04:19:42.062709   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.31 HWAddress:26:f:15:87:ad:4e ID:1,26:f:15:87:ad:4e Lease:0x651befb8}
	I1002 04:19:42.062731   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.30 HWAddress:6a:0:7f:10:d4:d9 ID:1,6a:0:7f:10:d4:d9 Lease:0x651beeba}
	I1002 04:19:42.062768   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.29 HWAddress:52:d3:be:bc:4f:c2 ID:1,52:d3:be:bc:4f:c2 Lease:0x651bee49}
	I1002 04:19:42.062788   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.28 HWAddress:3a:e8:1f:a6:a4:63 ID:1,3a:e8:1f:a6:a4:63 Lease:0x651bed41}
	I1002 04:19:42.062817   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.27 HWAddress:d2:4e:a:29:75:a7 ID:1,d2:4e:a:29:75:a7 Lease:0x651bebc0}
	I1002 04:19:42.062848   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.26 HWAddress:2a:21:83:2d:61:52 ID:1,2a:21:83:2d:61:52 Lease:0x651bec15}
	I1002 04:19:42.062855   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.25 HWAddress:8a:7d:ad:ea:52:8f ID:1,8a:7d:ad:ea:52:8f Lease:0x651beb1d}
	I1002 04:19:42.062906   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.24 HWAddress:7a:91:f7:be:fd:e3 ID:1,7a:91:f7:be:fd:e3 Lease:0x651beb01}
	I1002 04:19:42.062932   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.23 HWAddress:2e:f4:d7:73:da:57 ID:1,2e:f4:d7:73:da:57 Lease:0x651beac6}
	I1002 04:19:42.062980   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.22 HWAddress:e2:e6:83:39:ae:b1 ID:1,e2:e6:83:39:ae:b1 Lease:0x651beaa8}
	I1002 04:19:42.062993   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.21 HWAddress:ce:bf:5f:b1:ac:25 ID:1,ce:bf:5f:b1:ac:25 Lease:0x651bea97}
	I1002 04:19:42.063016   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.20 HWAddress:ca:d:2a:ac:b1:6 ID:1,ca:d:2a:ac:b1:6 Lease:0x651bea88}
	I1002 04:19:42.063050   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.19 HWAddress:52:f5:58:b1:ed:72 ID:1,52:f5:58:b1:ed:72 Lease:0x651bea3a}
	I1002 04:19:42.063067   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.18 HWAddress:12:d5:a9:d3:2d:62 ID:1,12:d5:a9:d3:2d:62 Lease:0x651bea2e}
	I1002 04:19:42.063080   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.17 HWAddress:8e:72:e7:d4:b0:8b ID:1,8e:72:e7:d4:b0:8b Lease:0x651a98a3}
	I1002 04:19:42.063109   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.16 HWAddress:f6:1c:a1:3f:3a:af ID:1,f6:1c:a1:3f:3a:af Lease:0x651be9e8}
	I1002 04:19:42.063140   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.15 HWAddress:c6:8:4d:2b:4b:5d ID:1,c6:8:4d:2b:4b:5d Lease:0x651a987d}
	I1002 04:19:42.063148   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.14 HWAddress:22:b5:88:68:b3:50 ID:1,22:b5:88:68:b3:50 Lease:0x651be97d}
	I1002 04:19:42.063157   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.13 HWAddress:92:79:a6:ba:9b:af ID:1,92:79:a6:ba:9b:af Lease:0x651be99e}
	I1002 04:19:42.063164   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.12 HWAddress:26:7c:c7:f5:d5:85 ID:1,26:7c:c7:f5:d5:85 Lease:0x651a97f2}
	I1002 04:19:42.063173   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.11 HWAddress:da:69:84:ff:8a:c9 ID:1,da:69:84:ff:8a:c9 Lease:0x651be87c}
	I1002 04:19:42.063181   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.10 HWAddress:ee:b0:43:fa:b6:b5 ID:1,ee:b0:43:fa:b6:b5 Lease:0x651be85c}
	I1002 04:19:42.063190   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.9 HWAddress:b6:64:53:57:2a:86 ID:1,b6:64:53:57:2a:86 Lease:0x651be843}
	I1002 04:19:42.063197   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.8 HWAddress:2a:f4:7:2c:43:de ID:1,2a:f4:7:2c:43:de Lease:0x651be835}
	I1002 04:19:42.063206   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.7 HWAddress:8e:a8:11:c9:a1:e5 ID:1,8e:a8:11:c9:a1:e5 Lease:0x651be820}
	I1002 04:19:42.063222   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.6 HWAddress:96:80:f7:c:df:d8 ID:1,96:80:f7:c:df:d8 Lease:0x651a9696}
	I1002 04:19:42.063245   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.5 HWAddress:16:a0:fc:26:e:40 ID:1,16:a0:fc:26:e:40 Lease:0x651be7e5}
	I1002 04:19:42.063273   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.4 HWAddress:ae:5d:4a:2f:b:74 ID:1,ae:5d:4a:2f:b:74 Lease:0x651be77b}
	I1002 04:19:42.063282   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.3 HWAddress:ae:e6:9d:b3:23:84 ID:1,ae:e6:9d:b3:23:84 Lease:0x651be710}
	I1002 04:19:42.063291   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.2 HWAddress:72:88:6:ff:96:d3 ID:1,72:88:6:ff:96:d3 Lease:0x651be6d8}
	I1002 04:19:42.063299   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.69.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be649}
	I1002 04:19:42.063311   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.15 HWAddress:22:ed:a8:a4:a2:69 ID:1,22:ed:a8:a4:a2:69 Lease:0x651be62b}
	I1002 04:19:42.063333   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.14 HWAddress:1e:b6:d4:aa:d5:7 ID:1,1e:b6:d4:aa:d5:7 Lease:0x651a9440}
	I1002 04:19:42.063360   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.13 HWAddress:5a:91:c6:5:e0:24 ID:1,5a:91:c6:5:e0:24 Lease:0x651be60d}
	I1002 04:19:42.063366   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.12 HWAddress:f6:6d:98:92:1:a9 ID:1,f6:6d:98:92:1:a9 Lease:0x651be5da}
	I1002 04:19:42.063374   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.11 HWAddress:7e:eb:66:8f:41:b3 ID:1,7e:eb:66:8f:41:b3 Lease:0x651a930e}
	I1002 04:19:42.063381   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.10 HWAddress:c2:39:e0:92:22:c6 ID:1,c2:39:e0:92:22:c6 Lease:0x651a92e0}
	I1002 04:19:42.063393   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.9 HWAddress:9a:6:5a:80:5e:aa ID:1,9a:6:5a:80:5e:aa Lease:0x651be419}
	I1002 04:19:42.063401   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.8 HWAddress:e2:fd:a3:90:3:c1 ID:1,e2:fd:a3:90:3:c1 Lease:0x651be3f2}
	I1002 04:19:42.063424   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.7 HWAddress:f6:c5:4d:b6:2d:eb ID:1,f6:c5:4d:b6:2d:eb Lease:0x651be38e}
	I1002 04:19:42.063458   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.6 HWAddress:a:a6:e5:5f:7e:77 ID:1,a:a6:e5:5f:7e:77 Lease:0x651be31e}
	I1002 04:19:42.063469   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.5 HWAddress:b6:3d:e6:50:d:a4 ID:1,b6:3d:e6:50:d:a4 Lease:0x651be2ee}
	I1002 04:19:42.063510   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.4 HWAddress:ee:d4:6c:2:6f:f5 ID:1,ee:d4:6c:2:6f:f5 Lease:0x651be211}
	I1002 04:19:42.063548   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.3 HWAddress:f6:86:f1:2b:db:97 ID:1,f6:86:f1:2b:db:97 Lease:0x651a9086}
	I1002 04:19:42.063558   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.68.2 HWAddress:f2:d2:31:bc:71:a1 ID:1,f2:d2:31:bc:71:a1 Lease:0x651be0ee}
	I1002 04:19:42.063567   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name: IPAddress:192.168.67.2 HWAddress:f2:21:c3:3b:c7:2c ID:1,f2:21:c3:3b:c7:2c Lease:0x651be0b9}
	I1002 04:19:44.064217   15148 main.go:141] libmachine: (auto-766000) DBG | Attempt 5
	I1002 04:19:44.064243   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:44.064416   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:44.066028   15148 main.go:141] libmachine: (auto-766000) DBG | Searching for ca:c3:3f:1c:55:a in /var/db/dhcpd_leases ...
	I1002 04:19:44.066267   15148 main.go:141] libmachine: (auto-766000) DBG | Found 74 entries in /var/db/dhcpd_leases!
	I1002 04:19:44.066282   15148 main.go:141] libmachine: (auto-766000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.59 HWAddress:ca:c3:3f:1c:55:a ID:1,ca:c3:3f:1c:55:a Lease:0x651bf8ce}
	I1002 04:19:44.066291   15148 main.go:141] libmachine: (auto-766000) DBG | Found match: ca:c3:3f:1c:55:a
	I1002 04:19:44.066298   15148 main.go:141] libmachine: (auto-766000) DBG | IP: 192.168.70.59
	I1002 04:19:44.066335   15148 main.go:141] libmachine: (auto-766000) Calling .GetConfigRaw
	I1002 04:19:44.067091   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:44.067242   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:44.067398   15148 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1002 04:19:44.067418   15148 main.go:141] libmachine: (auto-766000) Calling .GetState
	I1002 04:19:44.067544   15148 main.go:141] libmachine: (auto-766000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:19:44.067615   15148 main.go:141] libmachine: (auto-766000) DBG | hyperkit pid from json: 15157
	I1002 04:19:44.068745   15148 main.go:141] libmachine: Detecting operating system of created instance...
	I1002 04:19:44.068755   15148 main.go:141] libmachine: Waiting for SSH to be available...
	I1002 04:19:44.068784   15148 main.go:141] libmachine: Getting to WaitForSSH function...
	I1002 04:19:44.068789   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.068885   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.069010   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.069090   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.069261   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.069435   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.069791   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.069799   15148 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1002 04:19:44.133995   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 04:19:44.134012   15148 main.go:141] libmachine: Detecting the provisioner...
	I1002 04:19:44.134018   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.134214   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.134314   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.134411   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.134499   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.134635   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.134889   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.134898   15148 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1002 04:19:44.199007   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1002 04:19:44.199062   15148 main.go:141] libmachine: found compatible host: buildroot
	I1002 04:19:44.199069   15148 main.go:141] libmachine: Provisioning with buildroot...
	I1002 04:19:44.199075   15148 main.go:141] libmachine: (auto-766000) Calling .GetMachineName
	I1002 04:19:44.199218   15148 buildroot.go:166] provisioning hostname "auto-766000"
	I1002 04:19:44.199230   15148 main.go:141] libmachine: (auto-766000) Calling .GetMachineName
	I1002 04:19:44.199315   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.199404   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.199489   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.199576   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.199649   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.199795   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.200073   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.200083   15148 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-766000 && echo "auto-766000" | sudo tee /etc/hostname
	I1002 04:19:44.270045   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-766000
	
	I1002 04:19:44.270065   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.270215   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.270328   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.270433   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.270521   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.270657   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.270897   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.270909   15148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-766000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-766000/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-766000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 04:19:44.339629   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 04:19:44.339646   15148 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-9782/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-9782/.minikube}
	I1002 04:19:44.339661   15148 buildroot.go:174] setting up certificates
	I1002 04:19:44.339672   15148 provision.go:83] configureAuth start
	I1002 04:19:44.339679   15148 main.go:141] libmachine: (auto-766000) Calling .GetMachineName
	I1002 04:19:44.339814   15148 main.go:141] libmachine: (auto-766000) Calling .GetIP
	I1002 04:19:44.339916   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.340017   15148 provision.go:138] copyHostCerts
	I1002 04:19:44.340097   15148 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem, removing ...
	I1002 04:19:44.340109   15148 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem
	I1002 04:19:44.340231   15148 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem (1679 bytes)
	I1002 04:19:44.340446   15148 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem, removing ...
	I1002 04:19:44.340452   15148 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem
	I1002 04:19:44.340526   15148 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem (1078 bytes)
	I1002 04:19:44.340688   15148 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem, removing ...
	I1002 04:19:44.340694   15148 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem
	I1002 04:19:44.340760   15148 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem (1123 bytes)
	I1002 04:19:44.340898   15148 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem org=jenkins.auto-766000 san=[192.168.70.59 192.168.70.59 localhost 127.0.0.1 minikube auto-766000]
	I1002 04:19:44.463101   15148 provision.go:172] copyRemoteCerts
	I1002 04:19:44.463155   15148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 04:19:44.463172   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.463375   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.463564   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.463688   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.463772   15148 sshutil.go:53] new ssh client: &{IP:192.168.70.59 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/id_rsa Username:docker}
	I1002 04:19:44.502576   15148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 04:19:44.517892   15148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1002 04:19:44.533137   15148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 04:19:44.548872   15148 provision.go:86] duration metric: configureAuth took 209.182546ms
	I1002 04:19:44.548883   15148 buildroot.go:189] setting minikube options for container-runtime
	I1002 04:19:44.549004   15148 config.go:182] Loaded profile config "auto-766000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 04:19:44.549016   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:44.549152   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.549255   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.549334   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.549416   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.549509   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.549638   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.550117   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.550125   15148 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 04:19:44.614859   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 04:19:44.614874   15148 buildroot.go:70] root file system type: tmpfs
	I1002 04:19:44.614961   15148 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 04:19:44.614979   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.615103   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.615200   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.615298   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.615395   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.615522   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.615765   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.615814   15148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 04:19:44.687179   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 04:19:44.687199   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:44.687335   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:44.687417   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.687507   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:44.687613   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:44.687732   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:44.687981   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:44.687994   15148 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 04:19:45.180803   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 04:19:45.180827   15148 main.go:141] libmachine: Checking connection to Docker...
	I1002 04:19:45.180834   15148 main.go:141] libmachine: (auto-766000) Calling .GetURL
	I1002 04:19:45.180980   15148 main.go:141] libmachine: Docker is up and running!
	I1002 04:19:45.180987   15148 main.go:141] libmachine: Reticulating splines...
	I1002 04:19:45.180992   15148 client.go:171] LocalClient.Create took 11.918158737s
	I1002 04:19:45.181002   15148 start.go:167] duration metric: libmachine.API.Create for "auto-766000" took 11.91820658s
	I1002 04:19:45.181017   15148 start.go:300] post-start starting for "auto-766000" (driver="hyperkit")
	I1002 04:19:45.181027   15148 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 04:19:45.181037   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:45.181191   15148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 04:19:45.181202   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:45.181285   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:45.181377   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:45.181457   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:45.181551   15148 sshutil.go:53] new ssh client: &{IP:192.168.70.59 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/id_rsa Username:docker}
	I1002 04:19:45.220814   15148 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 04:19:45.223429   15148 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 04:19:45.223441   15148 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-9782/.minikube/addons for local assets ...
	I1002 04:19:45.223529   15148 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-9782/.minikube/files for local assets ...
	I1002 04:19:45.223689   15148 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem -> 102442.pem in /etc/ssl/certs
	I1002 04:19:45.223878   15148 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 04:19:45.229927   15148 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem --> /etc/ssl/certs/102442.pem (1708 bytes)
	I1002 04:19:45.245413   15148 start.go:303] post-start completed in 64.386949ms
	I1002 04:19:45.245436   15148 main.go:141] libmachine: (auto-766000) Calling .GetConfigRaw
	I1002 04:19:45.246019   15148 main.go:141] libmachine: (auto-766000) Calling .GetIP
	I1002 04:19:45.246173   15148 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/auto-766000/config.json ...
	I1002 04:19:45.246487   15148 start.go:128] duration metric: createHost completed in 12.01429124s
	I1002 04:19:45.246505   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:45.246620   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:45.246722   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:45.246790   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:45.246862   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:45.246965   15148 main.go:141] libmachine: Using SSH client type: native
	I1002 04:19:45.247203   15148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.59 22 <nil> <nil>}
	I1002 04:19:45.247211   15148 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 04:19:45.316066   15148 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696245585.480737053
	
	I1002 04:19:45.316078   15148 fix.go:206] guest clock: 1696245585.480737053
	I1002 04:19:45.316083   15148 fix.go:219] Guest: 2023-10-02 04:19:45.480737053 -0700 PDT Remote: 2023-10-02 04:19:45.246496 -0700 PDT m=+12.668200230 (delta=234.241053ms)
	I1002 04:19:45.316104   15148 fix.go:190] guest clock delta is within tolerance: 234.241053ms
	I1002 04:19:45.316108   15148 start.go:83] releasing machines lock for "auto-766000", held for 12.083983295s
	I1002 04:19:45.316128   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:45.316267   15148 main.go:141] libmachine: (auto-766000) Calling .GetIP
	I1002 04:19:45.316427   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:45.316768   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:45.316892   15148 main.go:141] libmachine: (auto-766000) Calling .DriverName
	I1002 04:19:45.317056   15148 ssh_runner.go:195] Run: cat /version.json
	I1002 04:19:45.317074   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:45.317175   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:45.317265   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:45.317368   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:45.317446   15148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 04:19:45.317475   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHHostname
	I1002 04:19:45.317473   15148 sshutil.go:53] new ssh client: &{IP:192.168.70.59 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/id_rsa Username:docker}
	I1002 04:19:45.317569   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHPort
	I1002 04:19:45.317670   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHKeyPath
	I1002 04:19:45.317771   15148 main.go:141] libmachine: (auto-766000) Calling .GetSSHUsername
	I1002 04:19:45.317867   15148 sshutil.go:53] new ssh client: &{IP:192.168.70.59 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/auto-766000/id_rsa Username:docker}
	I1002 04:19:45.350240   15148 ssh_runner.go:195] Run: systemctl --version
	I1002 04:19:45.354166   15148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 04:19:45.397925   15148 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 04:19:45.398068   15148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 04:19:45.410133   15148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 04:19:45.410159   15148 start.go:469] detecting cgroup driver to use...
	I1002 04:19:45.410337   15148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 04:19:45.424433   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 04:19:45.432110   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 04:19:45.439815   15148 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 04:19:45.439876   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 04:19:45.448000   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 04:19:45.455897   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 04:19:45.463805   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 04:19:45.472359   15148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 04:19:45.480502   15148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 04:19:45.488433   15148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 04:19:45.496339   15148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 04:19:45.503701   15148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:19:45.592075   15148 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 04:19:45.604295   15148 start.go:469] detecting cgroup driver to use...
	I1002 04:19:45.604374   15148 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 04:19:45.615447   15148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 04:19:45.628586   15148 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 04:19:45.642436   15148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 04:19:45.652150   15148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 04:19:45.662114   15148 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 04:19:45.687528   15148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 04:19:45.698261   15148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 04:19:45.712324   15148 ssh_runner.go:195] Run: which cri-dockerd
	I1002 04:19:45.715064   15148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 04:19:45.721199   15148 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 04:19:45.732831   15148 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 04:19:45.817884   15148 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 04:19:45.906981   15148 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 04:19:45.907064   15148 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 04:19:45.918895   15148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:19:46.027389   15148 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 04:19:47.288355   15148 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.260921363s)
	I1002 04:19:47.288432   15148 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 04:19:47.376915   15148 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 04:19:47.462295   15148 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 04:19:47.562802   15148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:19:47.664138   15148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 04:19:47.697063   15148 out.go:177] 
	W1002 04:19:47.718930   15148 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1002 04:19:47.718941   15148 out.go:239] * 
	W1002 04:19:47.719730   15148 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 04:19:47.783974   15148 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/auto/Start (15.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-150000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-150000 "sudo crictl images -o json": exit status 1 (133.11528ms)

                                                
                                                
-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-150000 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-150000 -n old-k8s-version-150000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-150000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-150000 logs -n 25: (2.356765086s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-766000 sudo                                 | kubenet-766000               | jenkins | v1.31.2 | 02 Oct 23 04:25 PDT | 02 Oct 23 04:25 PDT |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p kubenet-766000 sudo find                            | kubenet-766000               | jenkins | v1.31.2 | 02 Oct 23 04:25 PDT | 02 Oct 23 04:25 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p kubenet-766000 sudo crio                            | kubenet-766000               | jenkins | v1.31.2 | 02 Oct 23 04:25 PDT | 02 Oct 23 04:25 PDT |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p kubenet-766000                                      | kubenet-766000               | jenkins | v1.31.2 | 02 Oct 23 04:25 PDT | 02 Oct 23 04:26 PDT |
	| start   | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:26 PDT | 02 Oct 23 04:26 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                              |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-803000            | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:27 PDT | 02 Oct 23 04:27 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:27 PDT | 02 Oct 23 04:27 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-803000                 | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:27 PDT | 02 Oct 23 04:27 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:27 PDT | 02 Oct 23 04:32 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --embed-certs                              |                              |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-150000        | old-k8s-version-150000       | jenkins | v1.31.2 | 02 Oct 23 04:28 PDT | 02 Oct 23 04:28 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-150000                              | old-k8s-version-150000       | jenkins | v1.31.2 | 02 Oct 23 04:28 PDT | 02 Oct 23 04:28 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-150000             | old-k8s-version-150000       | jenkins | v1.31.2 | 02 Oct 23 04:28 PDT | 02 Oct 23 04:28 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-150000                              | old-k8s-version-150000       | jenkins | v1.31.2 | 02 Oct 23 04:28 PDT | 02 Oct 23 04:36 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-803000 sudo                             | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:32 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:32 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:32 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:32 PDT |
	| delete  | -p embed-certs-803000                                  | embed-certs-803000           | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:32 PDT |
	| delete  | -p                                                     | disable-driver-mounts-759000 | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:32 PDT |
	|         | disable-driver-mounts-759000                           |                              |         |         |                     |                     |
	| start   | -p no-preload-113000                                   | no-preload-113000            | jenkins | v1.31.2 | 02 Oct 23 04:32 PDT | 02 Oct 23 04:34 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-113000             | no-preload-113000            | jenkins | v1.31.2 | 02 Oct 23 04:34 PDT | 02 Oct 23 04:34 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-113000                                   | no-preload-113000            | jenkins | v1.31.2 | 02 Oct 23 04:34 PDT | 02 Oct 23 04:34 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-113000                  | no-preload-113000            | jenkins | v1.31.2 | 02 Oct 23 04:34 PDT | 02 Oct 23 04:34 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-113000                                   | no-preload-113000            | jenkins | v1.31.2 | 02 Oct 23 04:34 PDT |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=hyperkit                                      |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p old-k8s-version-150000 sudo                         | old-k8s-version-150000       | jenkins | v1.31.2 | 02 Oct 23 04:36 PDT |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 04:34:23
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 04:34:23.480977   18146 out.go:296] Setting OutFile to fd 1 ...
	I1002 04:34:23.481366   18146 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:34:23.481372   18146 out.go:309] Setting ErrFile to fd 2...
	I1002 04:34:23.481376   18146 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:34:23.481551   18146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 04:34:23.482932   18146 out.go:303] Setting JSON to false
	I1002 04:34:23.506309   18146 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":7431,"bootTime":1696239032,"procs":493,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 04:34:23.506404   18146 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 04:34:23.528638   18146 out.go:177] * [no-preload-113000] minikube v1.31.2 on Darwin 14.0
	I1002 04:34:23.571487   18146 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 04:34:23.571550   18146 notify.go:220] Checking for updates...
	I1002 04:34:23.615457   18146 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 04:34:23.637484   18146 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 04:34:23.658727   18146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 04:34:23.680487   18146 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	I1002 04:34:23.703595   18146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 04:34:23.727273   18146 config.go:182] Loaded profile config "no-preload-113000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 04:34:23.727969   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:23.728051   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:23.737001   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64073
	I1002 04:34:23.737377   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:23.737799   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:34:23.737811   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:23.738020   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:23.738127   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:23.738321   18146 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 04:34:23.738571   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:23.738591   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:23.746184   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64075
	I1002 04:34:23.746547   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:23.746914   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:34:23.746927   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:23.747147   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:23.747248   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:23.775446   18146 out.go:177] * Using the hyperkit driver based on existing profile
	I1002 04:34:23.817617   18146 start.go:298] selected driver: hyperkit
	I1002 04:34:23.817643   18146 start.go:902] validating driver "hyperkit" against &{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-113000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.70 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 04:34:23.817821   18146 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 04:34:23.823953   18146 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.824075   18146 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17340-9782/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1002 04:34:23.832394   18146 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1002 04:34:23.837428   18146 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:23.837447   18146 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1002 04:34:23.837576   18146 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 04:34:23.837606   18146 cni.go:84] Creating CNI manager for ""
	I1002 04:34:23.837619   18146 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 04:34:23.837632   18146 start_flags.go:321] config:
	{Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-113000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.70 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 04:34:23.837771   18146 iso.go:125] acquiring lock: {Name:mkb1616e5312c7f7300d9edabdcb664e7c56c074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.881544   18146 out.go:177] * Starting control plane node no-preload-113000 in cluster no-preload-113000
	I1002 04:34:23.903496   18146 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 04:34:23.903727   18146 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/config.json ...
	I1002 04:34:23.903842   18146 cache.go:107] acquiring lock: {Name:mkca834c6a96af79b67e4c2f6135afd242f71a6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.903882   18146 cache.go:107] acquiring lock: {Name:mk9f8ab6279abc73a4e58d96e05b9262795a19f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.903915   18146 cache.go:107] acquiring lock: {Name:mk7a2388b84ea6c23ceeec968cbbfc5c8b8f39a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.904095   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 04:34:23.904139   18146 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 302.044µs
	I1002 04:34:23.904146   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 exists
	I1002 04:34:23.904165   18146 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 04:34:23.904175   18146 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2" took 329.054µs
	I1002 04:34:23.904187   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 exists
	I1002 04:34:23.904195   18146 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.2 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.2 succeeded
	I1002 04:34:23.904211   18146 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0" took 332.271µs
	I1002 04:34:23.904234   18146 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I1002 04:34:23.904220   18146 cache.go:107] acquiring lock: {Name:mkefc61f6f358fee94ed1b50726351df36311df2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.904252   18146 cache.go:107] acquiring lock: {Name:mk9923e66c9fc61f82e7e23a5e4a49e7e967efa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.904299   18146 cache.go:107] acquiring lock: {Name:mkd2f7f57070dfbfd60926c73cf5eb1c35ab2a25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.904228   18146 cache.go:107] acquiring lock: {Name:mkcf652a62d52166f22704e345e8b965c00c4711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.904228   18146 cache.go:107] acquiring lock: {Name:mk9fd28d352ca916dc941366e9e02fa37d58645e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 04:34:23.904540   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 exists
	I1002 04:34:23.904584   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1002 04:34:23.904569   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1002 04:34:23.904575   18146 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2" took 400.611µs
	I1002 04:34:23.904611   18146 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.2 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.2 succeeded
	I1002 04:34:23.904612   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 exists
	I1002 04:34:23.904612   18146 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 541.262µs
	I1002 04:34:23.904610   18146 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 552.291µs
	I1002 04:34:23.904642   18146 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1002 04:34:23.904647   18146 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1002 04:34:23.904627   18146 cache.go:115] /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 exists
	I1002 04:34:23.904639   18146 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2" took 409.425µs
	I1002 04:34:23.904666   18146 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.2 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.2 succeeded
	I1002 04:34:23.904675   18146 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.2" -> "/Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2" took 557.138µs
	I1002 04:34:23.904687   18146 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.2 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.2 succeeded
	I1002 04:34:23.904712   18146 cache.go:87] Successfully saved all images to host disk.
	I1002 04:34:23.905017   18146 start.go:365] acquiring machines lock for no-preload-113000: {Name:mk5657db51c0d6006a9e01bb2a1802e115658af0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 04:34:23.905108   18146 start.go:369] acquired machines lock for "no-preload-113000" in 72.596µs
	I1002 04:34:23.905147   18146 start.go:96] Skipping create...Using existing machine configuration
	I1002 04:34:23.905162   18146 fix.go:54] fixHost starting: 
	I1002 04:34:23.905589   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:23.905619   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:23.913988   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64077
	I1002 04:34:23.914353   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:23.914718   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:34:23.914733   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:23.914976   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:23.915097   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:23.915198   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetState
	I1002 04:34:23.915287   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:23.915341   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 17900
	I1002 04:34:23.916381   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid 17900 missing from process table
	I1002 04:34:23.916411   18146 fix.go:102] recreateIfNeeded on no-preload-113000: state=Stopped err=<nil>
	I1002 04:34:23.916428   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	W1002 04:34:23.916507   18146 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 04:34:23.938288   18146 out.go:177] * Restarting existing hyperkit VM for "no-preload-113000" ...
	I1002 04:34:23.982125   18146 main.go:141] libmachine: (no-preload-113000) Calling .Start
	I1002 04:34:23.982450   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:23.982502   18146 main.go:141] libmachine: (no-preload-113000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/hyperkit.pid
	I1002 04:34:23.984324   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid 17900 missing from process table
	I1002 04:34:23.984350   18146 main.go:141] libmachine: (no-preload-113000) DBG | pid 17900 is in state "Stopped"
	I1002 04:34:23.984368   18146 main.go:141] libmachine: (no-preload-113000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/hyperkit.pid...
	I1002 04:34:23.984431   18146 main.go:141] libmachine: (no-preload-113000) DBG | Using UUID 5f4c4e84-6117-11ee-94d3-149d997cd0f1
	I1002 04:34:24.003214   18146 main.go:141] libmachine: (no-preload-113000) DBG | Generated MAC 56:42:5b:9b:63:29
	I1002 04:34:24.003255   18146 main.go:141] libmachine: (no-preload-113000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=no-preload-113000
	I1002 04:34:24.003467   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f4c4e84-6117-11ee-94d3-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00042bcb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1002 04:34:24.003510   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5f4c4e84-6117-11ee-94d3-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00042bcb0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/bzimage", Initrd:"/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I1002 04:34:24.003560   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5f4c4e84-6117-11ee-94d3-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/no-preload-113000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/tty,log=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/bzimage,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=no-preload-113000"}
	I1002 04:34:24.003611   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5f4c4e84-6117-11ee-94d3-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/no-preload-113000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/tty,log=/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/console-ring -f kexec,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/bzimage,/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=no-preload-113000"
	I1002 04:34:24.003631   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I1002 04:34:24.005091   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 DEBUG: hyperkit: Pid is 18157
	I1002 04:34:24.005554   18146 main.go:141] libmachine: (no-preload-113000) DBG | Attempt 0
	I1002 04:34:24.005569   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:24.005674   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 18157
	I1002 04:34:24.007486   18146 main.go:141] libmachine: (no-preload-113000) DBG | Searching for 56:42:5b:9b:63:29 in /var/db/dhcpd_leases ...
	I1002 04:34:24.007592   18146 main.go:141] libmachine: (no-preload-113000) DBG | Found 85 entries in /var/db/dhcpd_leases!
	I1002 04:34:24.007611   18146 main.go:141] libmachine: (no-preload-113000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.70.70 HWAddress:56:42:5b:9b:63:29 ID:1,56:42:5b:9b:63:29 Lease:0x651bfbd7}
	I1002 04:34:24.007630   18146 main.go:141] libmachine: (no-preload-113000) DBG | Found match: 56:42:5b:9b:63:29
	I1002 04:34:24.007641   18146 main.go:141] libmachine: (no-preload-113000) DBG | IP: 192.168.70.70
	I1002 04:34:24.007702   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetConfigRaw
	I1002 04:34:24.008346   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetIP
	I1002 04:34:24.008579   18146 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/config.json ...
	I1002 04:34:24.008985   18146 machine.go:88] provisioning docker machine ...
	I1002 04:34:24.008996   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:24.009129   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetMachineName
	I1002 04:34:24.009235   18146 buildroot.go:166] provisioning hostname "no-preload-113000"
	I1002 04:34:24.009251   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetMachineName
	I1002 04:34:24.009375   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:24.009478   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:24.009574   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:24.009711   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:24.009795   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:24.010191   18146 main.go:141] libmachine: Using SSH client type: native
	I1002 04:34:24.010476   18146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.70 22 <nil> <nil>}
	I1002 04:34:24.010488   18146 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-113000 && echo "no-preload-113000" | sudo tee /etc/hostname
	I1002 04:34:24.013508   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I1002 04:34:24.021596   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I1002 04:34:24.022616   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1002 04:34:24.022637   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1002 04:34:24.022662   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1002 04:34:24.022683   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1002 04:34:24.393101   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I1002 04:34:24.393133   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I1002 04:34:24.497132   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I1002 04:34:24.497154   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I1002 04:34:24.497167   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I1002 04:34:24.497182   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I1002 04:34:24.498125   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I1002 04:34:24.498138   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:24 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I1002 04:34:29.357021   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I1002 04:34:29.357039   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I1002 04:34:29.357048   18146 main.go:141] libmachine: (no-preload-113000) DBG | 2023/10/02 04:34:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I1002 04:34:37.212512   18146 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-113000
	
	I1002 04:34:37.212540   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:37.212738   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:37.212870   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.213004   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.213120   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:37.213250   18146 main.go:141] libmachine: Using SSH client type: native
	I1002 04:34:37.213520   18146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.70 22 <nil> <nil>}
	I1002 04:34:37.213533   18146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-113000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-113000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-113000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 04:34:37.299032   18146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 04:34:37.299051   18146 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17340-9782/.minikube CaCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17340-9782/.minikube}
	I1002 04:34:37.299084   18146 buildroot.go:174] setting up certificates
	I1002 04:34:37.299096   18146 provision.go:83] configureAuth start
	I1002 04:34:37.299104   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetMachineName
	I1002 04:34:37.299242   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetIP
	I1002 04:34:37.299335   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:37.299423   18146 provision.go:138] copyHostCerts
	I1002 04:34:37.299505   18146 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem, removing ...
	I1002 04:34:37.299516   18146 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem
	I1002 04:34:37.299643   18146 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.pem (1078 bytes)
	I1002 04:34:37.299894   18146 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem, removing ...
	I1002 04:34:37.299901   18146 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem
	I1002 04:34:37.299970   18146 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/cert.pem (1123 bytes)
	I1002 04:34:37.300128   18146 exec_runner.go:144] found /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem, removing ...
	I1002 04:34:37.300134   18146 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem
	I1002 04:34:37.300204   18146 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17340-9782/.minikube/key.pem (1679 bytes)
	I1002 04:34:37.300339   18146 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem org=jenkins.no-preload-113000 san=[192.168.70.70 192.168.70.70 localhost 127.0.0.1 minikube no-preload-113000]
	I1002 04:34:37.413618   18146 provision.go:172] copyRemoteCerts
	I1002 04:34:37.413683   18146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 04:34:37.413700   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:37.413886   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:37.414068   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.414275   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:37.414429   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:34:37.458844   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 04:34:37.475033   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1002 04:34:37.490773   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 04:34:37.506400   18146 provision.go:86] duration metric: configureAuth took 207.2804ms
	I1002 04:34:37.506415   18146 buildroot.go:189] setting minikube options for container-runtime
	I1002 04:34:37.506550   18146 config.go:182] Loaded profile config "no-preload-113000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 04:34:37.506563   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:37.506703   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:37.506813   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:37.506905   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.506994   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.507100   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:37.507220   18146 main.go:141] libmachine: Using SSH client type: native
	I1002 04:34:37.507463   18146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.70 22 <nil> <nil>}
	I1002 04:34:37.507471   18146 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 04:34:37.588410   18146 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 04:34:37.588423   18146 buildroot.go:70] root file system type: tmpfs
	I1002 04:34:37.588507   18146 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 04:34:37.588524   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:37.588656   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:37.588747   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.588833   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.588923   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:37.589064   18146 main.go:141] libmachine: Using SSH client type: native
	I1002 04:34:37.589317   18146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.70 22 <nil> <nil>}
	I1002 04:34:37.589363   18146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 04:34:37.676354   18146 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 04:34:37.676378   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:37.676506   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:37.676599   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.676690   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:37.676780   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:37.676904   18146 main.go:141] libmachine: Using SSH client type: native
	I1002 04:34:37.677152   18146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.70 22 <nil> <nil>}
	I1002 04:34:37.677165   18146 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 04:34:38.280627   18146 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 04:34:38.280659   18146 machine.go:91] provisioned docker machine in 14.271376223s
	I1002 04:34:38.280671   18146 start.go:300] post-start starting for "no-preload-113000" (driver="hyperkit")
	I1002 04:34:38.280709   18146 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 04:34:38.280720   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:38.281088   18146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 04:34:38.281118   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:38.281251   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:38.281405   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:38.281525   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:38.281603   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:34:38.327829   18146 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 04:34:38.330479   18146 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 04:34:38.330496   18146 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-9782/.minikube/addons for local assets ...
	I1002 04:34:38.330592   18146 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17340-9782/.minikube/files for local assets ...
	I1002 04:34:38.331250   18146 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem -> 102442.pem in /etc/ssl/certs
	I1002 04:34:38.331439   18146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 04:34:38.338025   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem --> /etc/ssl/certs/102442.pem (1708 bytes)
	I1002 04:34:38.354730   18146 start.go:303] post-start completed in 74.046572ms
	I1002 04:34:38.354746   18146 fix.go:56] fixHost completed within 14.449296939s
	I1002 04:34:38.354762   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:38.354892   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:38.354986   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:38.355073   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:38.355147   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:38.355270   18146 main.go:141] libmachine: Using SSH client type: native
	I1002 04:34:38.355549   18146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil>  [] 0s} 192.168.70.70 22 <nil> <nil>}
	I1002 04:34:38.355557   18146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 04:34:38.433341   18146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696246478.607093532
	
	I1002 04:34:38.433351   18146 fix.go:206] guest clock: 1696246478.607093532
	I1002 04:34:38.433356   18146 fix.go:219] Guest: 2023-10-02 04:34:38.607093532 -0700 PDT Remote: 2023-10-02 04:34:38.354751 -0700 PDT m=+14.906970943 (delta=252.342532ms)
	I1002 04:34:38.433368   18146 fix.go:190] guest clock delta is within tolerance: 252.342532ms
	I1002 04:34:38.433372   18146 start.go:83] releasing machines lock for "no-preload-113000", held for 14.527957972s
	I1002 04:34:38.433392   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:38.433523   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetIP
	I1002 04:34:38.433629   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:38.433927   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:38.434023   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:34:38.434162   18146 ssh_runner.go:195] Run: cat /version.json
	I1002 04:34:38.434180   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:38.434245   18146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 04:34:38.434276   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:34:38.434283   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:38.434367   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:34:38.434384   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:38.434470   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:34:38.434481   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:38.434574   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:34:38.434592   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:34:38.434660   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:34:38.477197   18146 ssh_runner.go:195] Run: systemctl --version
	I1002 04:34:38.522867   18146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 04:34:38.526806   18146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 04:34:38.526862   18146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 04:34:38.538158   18146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 04:34:38.538178   18146 start.go:469] detecting cgroup driver to use...
	I1002 04:34:38.538291   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 04:34:38.551936   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 04:34:38.559064   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 04:34:38.566344   18146 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 04:34:38.566398   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 04:34:38.573618   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 04:34:38.581008   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 04:34:38.588229   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 04:34:38.595291   18146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 04:34:38.602716   18146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 04:34:38.610180   18146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 04:34:38.616568   18146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 04:34:38.624010   18146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:34:38.710401   18146 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 04:34:38.722472   18146 start.go:469] detecting cgroup driver to use...
	I1002 04:34:38.722569   18146 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 04:34:38.734804   18146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 04:34:38.746345   18146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 04:34:38.761709   18146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 04:34:38.771238   18146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 04:34:38.780400   18146 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 04:34:38.869324   18146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 04:34:38.878164   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 04:34:38.890453   18146 ssh_runner.go:195] Run: which cri-dockerd
	I1002 04:34:38.892835   18146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 04:34:38.898641   18146 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 04:34:38.910098   18146 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 04:34:38.996487   18146 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 04:34:39.081572   18146 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 04:34:39.081679   18146 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 04:34:39.093141   18146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:34:39.178994   18146 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 04:34:40.484843   18146 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.305756591s)
	I1002 04:34:40.484954   18146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 04:34:40.574421   18146 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 04:34:40.662020   18146 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 04:34:40.757233   18146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:34:40.841464   18146 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 04:34:40.853406   18146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 04:34:40.941444   18146 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 04:34:40.995746   18146 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 04:34:40.996542   18146 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 04:34:41.000325   18146 start.go:537] Will wait 60s for crictl version
	I1002 04:34:41.000374   18146 ssh_runner.go:195] Run: which crictl
	I1002 04:34:41.002945   18146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 04:34:41.042139   18146 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 04:34:41.042212   18146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 04:34:41.059135   18146 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 04:34:40.348377   17532 kubeadm.go:322] [apiclient] All control plane components are healthy after 28.501692 seconds
	I1002 04:34:40.348469   17532 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 04:34:40.357118   17532 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 04:34:40.877478   17532 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 04:34:40.877619   17532 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-150000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 04:34:41.383994   17532 kubeadm.go:322] [bootstrap-token] Using token: 3j8rwx.ns5b2v1m1edpir8s
	I1002 04:34:41.420755   17532 out.go:204]   - Configuring RBAC rules ...
	I1002 04:34:41.420833   17532 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 04:34:41.420954   17532 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 04:34:41.424460   17532 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 04:34:41.427523   17532 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 04:34:41.435848   17532 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 04:34:41.499449   17532 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 04:34:41.795401   17532 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 04:34:41.796256   17532 kubeadm.go:322] 
	I1002 04:34:41.796310   17532 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 04:34:41.796324   17532 kubeadm.go:322] 
	I1002 04:34:41.796388   17532 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 04:34:41.796394   17532 kubeadm.go:322] 
	I1002 04:34:41.796416   17532 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 04:34:41.796467   17532 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 04:34:41.796503   17532 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 04:34:41.796515   17532 kubeadm.go:322] 
	I1002 04:34:41.796559   17532 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 04:34:41.796628   17532 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 04:34:41.796681   17532 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 04:34:41.796690   17532 kubeadm.go:322] 
	I1002 04:34:41.796757   17532 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1002 04:34:41.796824   17532 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 04:34:41.796829   17532 kubeadm.go:322] 
	I1002 04:34:41.796890   17532 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3j8rwx.ns5b2v1m1edpir8s \
	I1002 04:34:41.796971   17532 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f17a2c7186b92552183f5387e3508b9b991a200a76a694c3f9865d783ec73927 \
	I1002 04:34:41.796993   17532 kubeadm.go:322]     --control-plane 	  
	I1002 04:34:41.796998   17532 kubeadm.go:322] 
	I1002 04:34:41.797061   17532 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 04:34:41.797067   17532 kubeadm.go:322] 
	I1002 04:34:41.797135   17532 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3j8rwx.ns5b2v1m1edpir8s \
	I1002 04:34:41.797227   17532 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f17a2c7186b92552183f5387e3508b9b991a200a76a694c3f9865d783ec73927 
	I1002 04:34:41.797426   17532 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1002 04:34:41.797544   17532 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1002 04:34:41.797627   17532 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 04:34:41.797637   17532 cni.go:84] Creating CNI manager for ""
	I1002 04:34:41.797647   17532 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 04:34:41.797660   17532 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 04:34:41.797713   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:41.797715   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18 minikube.k8s.io/name=old-k8s-version-150000 minikube.k8s.io/updated_at=2023_10_02T04_34_41_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:41.809141   17532 ops.go:34] apiserver oom_adj: -16
	I1002 04:34:41.984526   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:42.057975   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:41.100568   18146 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 04:34:41.100603   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetIP
	I1002 04:34:41.101096   18146 ssh_runner.go:195] Run: grep 192.168.70.1	host.minikube.internal$ /etc/hosts
	I1002 04:34:41.103877   18146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.70.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 04:34:41.113101   18146 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 04:34:41.113163   18146 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 04:34:41.126375   18146 docker.go:664] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1002 04:34:41.126400   18146 cache_images.go:84] Images are preloaded, skipping loading
	I1002 04:34:41.126489   18146 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 04:34:41.144375   18146 cni.go:84] Creating CNI manager for ""
	I1002 04:34:41.144391   18146 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 04:34:41.144405   18146 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 04:34:41.144422   18146 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.70.70 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-113000 NodeName:no-preload-113000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.70.70"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.70.70 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 04:34:41.144518   18146 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.70.70
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-113000"
	  kubeletExtraArgs:
	    node-ip: 192.168.70.70
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.70.70"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 04:34:41.144578   18146 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=no-preload-113000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.70.70
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:no-preload-113000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 04:34:41.144637   18146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 04:34:41.150811   18146 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 04:34:41.150856   18146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 04:34:41.156670   18146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1002 04:34:41.168074   18146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 04:34:41.179478   18146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1002 04:34:41.191067   18146 ssh_runner.go:195] Run: grep 192.168.70.70	control-plane.minikube.internal$ /etc/hosts
	I1002 04:34:41.193622   18146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.70.70	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 04:34:41.201581   18146 certs.go:56] Setting up /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000 for IP: 192.168.70.70
	I1002 04:34:41.201597   18146 certs.go:190] acquiring lock for shared ca certs: {Name:mka21516f01f8997893bde7137827d8bb9b1922b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:34:41.201839   18146 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.key
	I1002 04:34:41.201900   18146 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17340-9782/.minikube/proxy-client-ca.key
	I1002 04:34:41.201988   18146 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/client.key
	I1002 04:34:41.202057   18146 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/apiserver.key.ce4b7353
	I1002 04:34:41.202114   18146 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/proxy-client.key
	I1002 04:34:41.202304   18146 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/10244.pem (1338 bytes)
	W1002 04:34:41.202346   18146 certs.go:433] ignoring /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/10244_empty.pem, impossibly tiny 0 bytes
	I1002 04:34:41.202355   18146 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 04:34:41.202387   18146 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/ca.pem (1078 bytes)
	I1002 04:34:41.202423   18146 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/cert.pem (1123 bytes)
	I1002 04:34:41.202457   18146 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/certs/key.pem (1679 bytes)
	I1002 04:34:41.202519   18146 certs.go:437] found cert: /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem (1708 bytes)
	I1002 04:34:41.203022   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 04:34:41.219808   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 04:34:41.236128   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 04:34:41.252637   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/no-preload-113000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 04:34:41.269296   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 04:34:41.285861   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 04:34:41.302548   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 04:34:41.320286   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 04:34:41.336892   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 04:34:41.353261   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/certs/10244.pem --> /usr/share/ca-certificates/10244.pem (1338 bytes)
	I1002 04:34:41.370056   18146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/ssl/certs/102442.pem --> /usr/share/ca-certificates/102442.pem (1708 bytes)
	I1002 04:34:41.387344   18146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 04:34:41.399949   18146 ssh_runner.go:195] Run: openssl version
	I1002 04:34:41.403672   18146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 04:34:41.410753   18146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 04:34:41.414048   18146 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 10:41 /usr/share/ca-certificates/minikubeCA.pem
	I1002 04:34:41.414086   18146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 04:34:41.417610   18146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 04:34:41.424711   18146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10244.pem && ln -fs /usr/share/ca-certificates/10244.pem /etc/ssl/certs/10244.pem"
	I1002 04:34:41.432357   18146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10244.pem
	I1002 04:34:41.435934   18146 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 10:45 /usr/share/ca-certificates/10244.pem
	I1002 04:34:41.435987   18146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10244.pem
	I1002 04:34:41.440260   18146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10244.pem /etc/ssl/certs/51391683.0"
	I1002 04:34:41.448227   18146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/102442.pem && ln -fs /usr/share/ca-certificates/102442.pem /etc/ssl/certs/102442.pem"
	I1002 04:34:41.455958   18146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/102442.pem
	I1002 04:34:41.459536   18146 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 10:45 /usr/share/ca-certificates/102442.pem
	I1002 04:34:41.459613   18146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/102442.pem
	I1002 04:34:41.463813   18146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/102442.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 04:34:41.470856   18146 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 04:34:41.474062   18146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 04:34:41.478132   18146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 04:34:41.482133   18146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 04:34:41.486354   18146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 04:34:41.490853   18146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 04:34:41.495377   18146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 04:34:41.499511   18146 kubeadm.go:404] StartCluster: {Name:no-preload-113000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:no-preload-113000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.70 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 04:34:41.499656   18146 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 04:34:41.512638   18146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 04:34:41.518839   18146 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 04:34:41.518854   18146 kubeadm.go:636] restartCluster start
	I1002 04:34:41.518898   18146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 04:34:41.525219   18146 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:41.525645   18146 kubeconfig.go:135] verify returned: extract IP: "no-preload-113000" does not appear in /Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 04:34:41.525782   18146 kubeconfig.go:146] "no-preload-113000" context is missing from /Users/jenkins/minikube-integration/17340-9782/kubeconfig - will repair!
	I1002 04:34:41.525998   18146 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-9782/kubeconfig: {Name:mk8fac99ef23914f53f2ba8da6b528e659fdee80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:34:41.527183   18146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 04:34:41.533125   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:41.533203   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:41.541334   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:41.541343   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:41.541428   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:41.549213   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:42.050700   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:42.050838   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:42.059633   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:42.550628   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:42.550850   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:42.560469   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:43.050417   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:43.050526   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:43.058424   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:42.626339   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:43.125584   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:43.625898   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:44.126374   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:44.626961   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:45.125480   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:45.625220   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:46.127118   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:46.626229   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:47.126242   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:43.550924   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:43.551072   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:43.560665   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:44.049982   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:44.050101   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:44.058518   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:44.549822   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:44.549951   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:44.557970   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:45.049440   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:45.049558   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:45.057995   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:45.549423   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:45.549535   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:45.557582   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:46.051049   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:46.051255   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:46.060672   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:46.549877   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:46.550063   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:46.559511   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:47.049486   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:47.049673   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:47.058695   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:47.551106   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:47.551339   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:47.560809   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:48.050486   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:48.050583   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:48.059403   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:47.625140   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:48.126399   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:48.625878   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:49.127206   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:49.626697   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:50.125973   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:50.627201   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:51.125070   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:51.625140   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:52.125192   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:48.550648   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:48.550767   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:48.559590   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:49.049595   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:49.049734   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:49.058691   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:49.549613   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:49.549755   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:49.559460   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:50.049692   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:50.049802   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:50.057782   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:50.549555   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:50.549741   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:50.558588   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:51.051230   18146 api_server.go:166] Checking apiserver status ...
	I1002 04:34:51.051453   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 04:34:51.060947   18146 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 04:34:51.534498   18146 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1002 04:34:51.534558   18146 kubeadm.go:1128] stopping kube-system containers ...
	I1002 04:34:51.534723   18146 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 04:34:51.554374   18146 docker.go:463] Stopping containers: [f0d37c0281ee 816fd9721985 108948c42959 baaee4ba4de1 19d7fbdd92a6 334989c09352 1083465c6231 60dd9e57bddb 8380e8cdcea1 96b4f37a16a7 228eb1acb551 08078cf5b4f8 699cebb30722 130f6ad8dbc9 9fbd5772416a]
	I1002 04:34:51.554448   18146 ssh_runner.go:195] Run: docker stop f0d37c0281ee 816fd9721985 108948c42959 baaee4ba4de1 19d7fbdd92a6 334989c09352 1083465c6231 60dd9e57bddb 8380e8cdcea1 96b4f37a16a7 228eb1acb551 08078cf5b4f8 699cebb30722 130f6ad8dbc9 9fbd5772416a
	I1002 04:34:51.569334   18146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 04:34:51.580307   18146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 04:34:51.586424   18146 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 04:34:51.586470   18146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 04:34:51.592229   18146 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 04:34:51.592238   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 04:34:51.669497   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 04:34:52.453588   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 04:34:52.593319   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 04:34:52.639066   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 04:34:52.687784   18146 api_server.go:52] waiting for apiserver process to appear ...
	I1002 04:34:52.687869   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 04:34:52.697087   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 04:34:53.206263   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 04:34:52.625772   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:53.126985   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:53.625224   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:54.125503   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:54.625737   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:55.125232   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:55.625435   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:56.125450   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:56.625395   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:57.126320   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:57.625218   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:58.126210   17532 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 04:34:58.217520   17532 kubeadm.go:1081] duration metric: took 16.419590792s to wait for elevateKubeSystemPrivileges.
	I1002 04:34:58.217542   17532 kubeadm.go:406] StartCluster complete in 6m19.462490871s
	I1002 04:34:58.217561   17532 settings.go:142] acquiring lock: {Name:mk0450b6cef6a87f94dc93227a37643f400162b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:34:58.217664   17532 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 04:34:58.218221   17532 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-9782/kubeconfig: {Name:mk8fac99ef23914f53f2ba8da6b528e659fdee80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:34:58.218515   17532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 04:34:58.218525   17532 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 04:34:58.218576   17532 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-150000"
	I1002 04:34:58.218579   17532 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-150000"
	I1002 04:34:58.218580   17532 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-150000"
	I1002 04:34:58.218595   17532 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-150000"
	I1002 04:34:58.218595   17532 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-150000"
	I1002 04:34:58.218586   17532 addons.go:69] Setting dashboard=true in profile "old-k8s-version-150000"
	W1002 04:34:58.218605   17532 addons.go:240] addon metrics-server should already be in state true
	W1002 04:34:58.218604   17532 addons.go:240] addon storage-provisioner should already be in state true
	I1002 04:34:58.218616   17532 addons.go:231] Setting addon dashboard=true in "old-k8s-version-150000"
	I1002 04:34:58.218602   17532 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-150000"
	W1002 04:34:58.218626   17532 addons.go:240] addon dashboard should already be in state true
	I1002 04:34:58.218647   17532 host.go:66] Checking if "old-k8s-version-150000" exists ...
	I1002 04:34:58.218665   17532 host.go:66] Checking if "old-k8s-version-150000" exists ...
	I1002 04:34:58.218677   17532 config.go:182] Loaded profile config "old-k8s-version-150000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 04:34:58.218669   17532 host.go:66] Checking if "old-k8s-version-150000" exists ...
	I1002 04:34:58.219020   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.219041   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.219048   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.219073   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.219124   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.219151   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.219174   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.219195   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.232545   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64101
	I1002 04:34:58.233017   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64103
	I1002 04:34:58.233322   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.233653   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.234084   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.234124   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.234138   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.234181   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.234499   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.234509   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.234657   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetState
	I1002 04:34:58.234881   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:58.235033   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | hyperkit pid from json: 17543
	I1002 04:34:58.235410   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.235491   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.237189   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64105
	I1002 04:34:58.238749   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64107
	I1002 04:34:58.238944   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.239178   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.239531   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.239553   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.239688   17532 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-150000"
	W1002 04:34:58.239703   17532 addons.go:240] addon default-storageclass should already be in state true
	I1002 04:34:58.239723   17532 host.go:66] Checking if "old-k8s-version-150000" exists ...
	I1002 04:34:58.239793   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.239809   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.239871   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.240112   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.240153   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.240176   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.240352   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.240376   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.241394   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.241595   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.247465   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64109
	I1002 04:34:58.248022   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.248594   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.248614   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.248895   17532 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-150000" context rescaled to 1 replicas
	I1002 04:34:58.248932   17532 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.70.68 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 04:34:58.270428   17532 out.go:177] * Verifying Kubernetes components...
	I1002 04:34:58.248983   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.252351   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64111
	I1002 04:34:58.252995   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64112
	I1002 04:34:58.311236   17532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 04:34:58.253479   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64113
	I1002 04:34:58.270671   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetState
	I1002 04:34:58.311569   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:58.311722   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | hyperkit pid from json: 17543
	I1002 04:34:58.311808   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.311854   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.311873   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:58.312378   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.312395   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.312507   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.312520   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.312523   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.312535   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.312726   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.312833   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.312906   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.312991   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetState
	I1002 04:34:58.313061   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetState
	I1002 04:34:58.313143   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:58.313197   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:58.313252   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | hyperkit pid from json: 17543
	I1002 04:34:58.313294   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | hyperkit pid from json: 17543
	I1002 04:34:58.313352   17532 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:34:58.313382   17532 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:34:58.313416   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .DriverName
	I1002 04:34:58.353452   17532 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 04:34:58.315165   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .DriverName
	I1002 04:34:58.315408   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .DriverName
	I1002 04:34:58.324386   17532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64117
	I1002 04:34:58.395152   17532 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 04:34:58.354106   17532 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:34:53.707313   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 04:34:54.206204   18146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 04:34:54.252378   18146 api_server.go:72] duration metric: took 1.564597591s to wait for apiserver process to appear ...
	I1002 04:34:54.252390   18146 api_server.go:88] waiting for apiserver healthz status ...
	I1002 04:34:54.252410   18146 api_server.go:253] Checking apiserver healthz at https://192.168.70.70:8443/healthz ...
	I1002 04:34:56.919896   18146 api_server.go:279] https://192.168.70.70:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 04:34:56.919914   18146 api_server.go:103] status: https://192.168.70.70:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 04:34:56.919954   18146 api_server.go:253] Checking apiserver healthz at https://192.168.70.70:8443/healthz ...
	I1002 04:34:56.963830   18146 api_server.go:279] https://192.168.70.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 04:34:56.963849   18146 api_server.go:103] status: https://192.168.70.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 04:34:57.464268   18146 api_server.go:253] Checking apiserver healthz at https://192.168.70.70:8443/healthz ...
	I1002 04:34:57.469242   18146 api_server.go:279] https://192.168.70.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 04:34:57.469257   18146 api_server.go:103] status: https://192.168.70.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 04:34:57.963983   18146 api_server.go:253] Checking apiserver healthz at https://192.168.70.70:8443/healthz ...
	I1002 04:34:57.969640   18146 api_server.go:279] https://192.168.70.70:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 04:34:57.969653   18146 api_server.go:103] status: https://192.168.70.70:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 04:34:58.464835   18146 api_server.go:253] Checking apiserver healthz at https://192.168.70.70:8443/healthz ...
	I1002 04:34:58.468739   18146 api_server.go:279] https://192.168.70.70:8443/healthz returned 200:
	ok
	I1002 04:34:58.474452   18146 api_server.go:141] control plane version: v1.28.2
	I1002 04:34:58.474464   18146 api_server.go:131] duration metric: took 4.222047557s to wait for apiserver health ...
	I1002 04:34:58.474469   18146 cni.go:84] Creating CNI manager for ""
	I1002 04:34:58.474479   18146 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 04:34:58.399046   17532 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-150000" to be "Ready" ...
	I1002 04:34:58.399328   17532 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.70.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 04:34:58.432699   17532 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 04:34:58.453440   17532 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 04:34:58.454001   17532 main.go:141] libmachine: Using API Version  1
	I1002 04:34:58.490335   17532 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 04:34:58.530440   18146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1002 04:34:58.490364   17532 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:34:58.610458   18146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1002 04:34:58.626279   18146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1002 04:34:58.662787   18146 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 04:34:58.670335   18146 system_pods.go:59] 8 kube-system pods found
	I1002 04:34:58.670358   18146 system_pods.go:61] "coredns-5dd5756b68-ck4h8" [4391b4ae-4a35-495d-866f-68a1d3dbdb33] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 04:34:58.670366   18146 system_pods.go:61] "etcd-no-preload-113000" [a0d36fac-6a12-4b09-aeef-c71ef43f03f2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 04:34:58.670374   18146 system_pods.go:61] "kube-apiserver-no-preload-113000" [0b75c959-8289-484f-a5ec-a4e57776546a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 04:34:58.670380   18146 system_pods.go:61] "kube-controller-manager-no-preload-113000" [11d19c2a-f68f-49c4-bac7-4b49883faf8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 04:34:58.670387   18146 system_pods.go:61] "kube-proxy-ngk77" [c5fe8a67-52e5-4cdc-9551-4ceff860c4ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1002 04:34:58.670393   18146 system_pods.go:61] "kube-scheduler-no-preload-113000" [bcce1df9-cc60-4544-ba51-8f5e4905f531] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 04:34:58.670402   18146 system_pods.go:61] "metrics-server-57f55c9bc5-ls7vw" [44921306-13b6-489b-87d2-552c4baba698] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:34:58.670409   18146 system_pods.go:61] "storage-provisioner" [8a9551c4-66ba-4753-bd64-5d50016b9978] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 04:34:58.670417   18146 system_pods.go:74] duration metric: took 7.618287ms to wait for pod list to return data ...
	I1002 04:34:58.670427   18146 node_conditions.go:102] verifying NodePressure condition ...
	I1002 04:34:58.673262   18146 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 04:34:58.673282   18146 node_conditions.go:123] node cpu capacity is 2
	I1002 04:34:58.673295   18146 node_conditions.go:105] duration metric: took 2.862592ms to run NodePressure ...
	I1002 04:34:58.673313   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 04:34:58.933722   18146 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 04:34:58.938817   18146 kubeadm.go:787] kubelet initialised
	I1002 04:34:58.938830   18146 kubeadm.go:788] duration metric: took 5.095743ms waiting for restarted kubelet to initialise ...
	I1002 04:34:58.938837   18146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 04:34:58.949272   18146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:58.955110   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:58.955124   18146 pod_ready.go:81] duration metric: took 5.835839ms waiting for pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace to be "Ready" ...
	E1002 04:34:58.955131   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:58.955140   18146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:58.959921   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "etcd-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:58.959936   18146 pod_ready.go:81] duration metric: took 4.789119ms waiting for pod "etcd-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	E1002 04:34:58.959943   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "etcd-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:58.959948   18146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:58.965796   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "kube-apiserver-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:58.965814   18146 pod_ready.go:81] duration metric: took 5.860734ms waiting for pod "kube-apiserver-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	E1002 04:34:58.965826   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "kube-apiserver-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:58.965834   18146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:59.066580   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:59.066596   18146 pod_ready.go:81] duration metric: took 100.754502ms waiting for pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	E1002 04:34:59.066603   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:59.066610   18146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ngk77" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:59.467059   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "kube-proxy-ngk77" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:59.467072   18146 pod_ready.go:81] duration metric: took 400.456299ms waiting for pod "kube-proxy-ngk77" in "kube-system" namespace to be "Ready" ...
	E1002 04:34:59.467099   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "kube-proxy-ngk77" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:59.467104   18146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:59.866202   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "kube-scheduler-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:59.866217   18146 pod_ready.go:81] duration metric: took 399.106927ms waiting for pod "kube-scheduler-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	E1002 04:34:59.866224   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "kube-scheduler-no-preload-113000" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:34:59.866233   18146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:00.267137   18146 pod_ready.go:97] node "no-preload-113000" hosting pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:35:00.267172   18146 pod_ready.go:81] duration metric: took 400.912156ms waiting for pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace to be "Ready" ...
	E1002 04:35:00.267180   18146 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-113000" hosting pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-113000" has status "Ready":"False"
	I1002 04:35:00.267186   18146 pod_ready.go:38] duration metric: took 1.328339525s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 04:35:00.267203   18146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 04:35:00.276046   18146 ops.go:34] apiserver oom_adj: -16
	I1002 04:35:00.276058   18146 kubeadm.go:640] restartCluster took 18.756927013s
	I1002 04:35:00.276064   18146 kubeadm.go:406] StartCluster complete in 18.776288324s
	I1002 04:35:00.276080   18146 settings.go:142] acquiring lock: {Name:mk0450b6cef6a87f94dc93227a37643f400162b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:35:00.276165   18146 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 04:35:00.276718   18146 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17340-9782/kubeconfig: {Name:mk8fac99ef23914f53f2ba8da6b528e659fdee80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 04:35:00.277013   18146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 04:35:00.277056   18146 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 04:35:00.277115   18146 addons.go:69] Setting storage-provisioner=true in profile "no-preload-113000"
	I1002 04:35:00.277121   18146 addons.go:69] Setting default-storageclass=true in profile "no-preload-113000"
	I1002 04:35:00.277128   18146 addons.go:231] Setting addon storage-provisioner=true in "no-preload-113000"
	W1002 04:35:00.277135   18146 addons.go:240] addon storage-provisioner should already be in state true
	I1002 04:35:00.277143   18146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-113000"
	I1002 04:35:00.277159   18146 addons.go:69] Setting dashboard=true in profile "no-preload-113000"
	I1002 04:35:00.277181   18146 host.go:66] Checking if "no-preload-113000" exists ...
	I1002 04:35:00.277194   18146 addons.go:231] Setting addon dashboard=true in "no-preload-113000"
	I1002 04:35:00.277185   18146 addons.go:69] Setting metrics-server=true in profile "no-preload-113000"
	W1002 04:35:00.277218   18146 addons.go:240] addon dashboard should already be in state true
	I1002 04:35:00.277315   18146 config.go:182] Loaded profile config "no-preload-113000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 04:35:00.277455   18146 addons.go:231] Setting addon metrics-server=true in "no-preload-113000"
	W1002 04:35:00.277491   18146 addons.go:240] addon metrics-server should already be in state true
	I1002 04:35:00.277637   18146 host.go:66] Checking if "no-preload-113000" exists ...
	I1002 04:35:00.277704   18146 host.go:66] Checking if "no-preload-113000" exists ...
	I1002 04:35:00.278084   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.278146   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.278149   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.278174   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.278177   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.278230   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.278341   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.278374   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.287480   18146 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-113000" context rescaled to 1 replicas
	I1002 04:35:00.287524   18146 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.70.70 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 04:35:00.326931   18146 out.go:177] * Verifying Kubernetes components...
	I1002 04:35:00.290741   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64125
	I1002 04:35:00.291263   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64126
	I1002 04:35:00.294274   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64127
	I1002 04:35:00.401120   18146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 04:35:00.294885   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64128
	I1002 04:35:00.327352   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.401776   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.401799   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.401807   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.401961   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.401975   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.402234   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.402248   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.402317   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.402349   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.402363   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.402368   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.402397   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.402541   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.402686   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.402697   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.402863   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetState
	I1002 04:35:00.402923   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.402942   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.402995   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:35:00.403054   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.403103   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.403113   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.403125   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 18157
	I1002 04:35:00.403141   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.409611   18146 addons.go:231] Setting addon default-storageclass=true in "no-preload-113000"
	W1002 04:35:00.409634   18146 addons.go:240] addon default-storageclass should already be in state true
	I1002 04:35:00.409655   18146 host.go:66] Checking if "no-preload-113000" exists ...
	I1002 04:35:00.409985   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.410021   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.414584   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64133
	I1002 04:35:00.414591   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64134
	I1002 04:35:00.415171   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.415236   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.415767   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.415784   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.415935   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.415955   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.416118   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.416274   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.416292   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetState
	I1002 04:35:00.416418   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetState
	I1002 04:35:00.416414   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:35:00.416492   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 18157
	I1002 04:35:00.416553   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:35:00.416661   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 18157
	I1002 04:35:00.418107   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:35:00.418133   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:35:00.418341   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64137
	I1002 04:35:00.455855   18146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 04:35:00.418869   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.421036   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64139
	I1002 04:35:00.450357   18146 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 04:35:00.450357   18146 node_ready.go:35] waiting up to 6m0s for node "no-preload-113000" to be "Ready" ...
	I1002 04:35:00.493045   18146 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 04:35:00.530174   18146 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1002 04:35:00.530309   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 04:35:00.531148   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.531337   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.588972   18146 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1002 04:35:00.589007   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.610017   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 04:35:00.610035   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 04:35:00.589003   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:35:00.610061   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:35:00.589562   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.610116   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.610351   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:35:00.610411   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:35:00.610622   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:35:00.610653   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.610697   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:35:00.610722   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.610841   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:35:00.610966   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetState
	I1002 04:35:00.611001   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:35:00.611045   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:35:00.611196   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:35:00.611250   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:35:00.611311   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 18157
	I1002 04:35:00.611490   18146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:35:00.611518   18146 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:35:00.613991   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:35:00.651869   18146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1002 04:34:58.490396   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 04:34:58.610431   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHHostname
	I1002 04:34:58.552316   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1002 04:34:58.610436   17532 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 04:34:58.610449   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1002 04:34:58.610454   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 04:34:58.552620   17532 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:34:58.610487   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHHostname
	I1002 04:34:58.610462   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHHostname
	I1002 04:34:58.610673   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHPort
	I1002 04:34:58.610749   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHPort
	I1002 04:34:58.610764   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetState
	I1002 04:34:58.610764   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHPort
	I1002 04:34:58.610845   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHKeyPath
	I1002 04:34:58.610930   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:34:58.610966   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHUsername
	I1002 04:34:58.610980   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHKeyPath
	I1002 04:34:58.610998   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHKeyPath
	I1002 04:34:58.611085   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | hyperkit pid from json: 17543
	I1002 04:34:58.611143   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHUsername
	I1002 04:34:58.611160   17532 sshutil.go:53] new ssh client: &{IP:192.168.70.68 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/old-k8s-version-150000/id_rsa Username:docker}
	I1002 04:34:58.611230   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHUsername
	I1002 04:34:58.611252   17532 sshutil.go:53] new ssh client: &{IP:192.168.70.68 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/old-k8s-version-150000/id_rsa Username:docker}
	I1002 04:34:58.611353   17532 sshutil.go:53] new ssh client: &{IP:192.168.70.68 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/old-k8s-version-150000/id_rsa Username:docker}
	I1002 04:34:58.612555   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .DriverName
	I1002 04:34:58.612779   17532 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 04:34:58.612787   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 04:34:58.612797   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHHostname
	I1002 04:34:58.612904   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHPort
	I1002 04:34:58.613015   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHKeyPath
	I1002 04:34:58.613118   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .GetSSHUsername
	I1002 04:34:58.613234   17532 sshutil.go:53] new ssh client: &{IP:192.168.70.68 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/old-k8s-version-150000/id_rsa Username:docker}
	I1002 04:34:58.620934   17532 node_ready.go:49] node "old-k8s-version-150000" has status "Ready":"True"
	I1002 04:34:58.620948   17532 node_ready.go:38] duration metric: took 130.595104ms waiting for node "old-k8s-version-150000" to be "Ready" ...
	I1002 04:34:58.620955   17532 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 04:34:58.628590   17532 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace to be "Ready" ...
	I1002 04:34:58.823297   17532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 04:34:58.967452   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 04:34:58.967467   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 04:34:59.011560   17532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 04:34:59.040811   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 04:34:59.040823   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 04:34:59.043536   17532 start.go:923] {"host.minikube.internal": 192.168.70.1} host record injected into CoreDNS's ConfigMap
	I1002 04:34:59.057711   17532 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 04:34:59.057723   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 04:34:59.104709   17532 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 04:34:59.104721   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 04:34:59.118476   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 04:34:59.118505   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 04:34:59.213257   17532 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 04:34:59.213273   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 04:34:59.246230   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 04:34:59.246244   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 04:34:59.278505   17532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 04:34:59.285422   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 04:34:59.285435   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 04:34:59.342580   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 04:34:59.342594   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 04:34:59.426485   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:34:59.426501   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:34:59.426672   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:34:59.426682   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:34:59.426692   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:34:59.426700   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:34:59.426705   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:34:59.426834   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:34:59.426845   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:34:59.426847   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:34:59.439516   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 04:34:59.439529   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 04:34:59.445815   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:34:59.445833   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:34:59.446022   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:34:59.446029   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:34:59.446042   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:34:59.464619   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 04:34:59.464631   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 04:34:59.558725   17532 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 04:34:59.558738   17532 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 04:34:59.560958   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:34:59.560971   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:34:59.561133   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:34:59.561133   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:34:59.561144   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:34:59.561156   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:34:59.561165   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:34:59.561298   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:34:59.561309   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:34:59.561322   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:34:59.582365   17532 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 04:35:00.028560   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:00.028577   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:35:00.028727   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:00.028738   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:00.028744   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:00.028756   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:35:00.028754   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:35:00.028904   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:00.028915   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:00.028916   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:35:00.028922   17532 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-150000"
	I1002 04:35:00.658735   17532 pod_ready.go:102] pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:00.703982   17532 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.121557417s)
	I1002 04:35:00.704023   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:00.704037   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:35:00.704231   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:35:00.704275   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:00.704288   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:00.704301   17532 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:00.704310   17532 main.go:141] libmachine: (old-k8s-version-150000) Calling .Close
	I1002 04:35:00.704449   17532 main.go:141] libmachine: (old-k8s-version-150000) DBG | Closing plugin on server side
	I1002 04:35:00.704452   17532 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:00.704462   17532 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:00.743699   17532 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-150000 addons enable metrics-server	
	
	
	I1002 04:35:00.785913   17532 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1002 04:35:00.843898   17532 addons.go:502] enable addons completed in 2.625369212s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1002 04:35:00.621390   18146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:64143
	I1002 04:35:00.669680   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1002 04:35:00.671359   18146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 04:35:00.672931   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1002 04:35:00.672954   18146 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 04:35:00.672964   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 04:35:00.672980   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:35:00.673172   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:35:00.673298   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:35:00.673391   18146 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:35:00.673444   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:35:00.673562   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:35:00.673850   18146 main.go:141] libmachine: Using API Version  1
	I1002 04:35:00.673881   18146 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:35:00.674134   18146 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:35:00.674242   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetState
	I1002 04:35:00.674326   18146 main.go:141] libmachine: (no-preload-113000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:35:00.674413   18146 main.go:141] libmachine: (no-preload-113000) DBG | hyperkit pid from json: 18157
	I1002 04:35:00.675628   18146 main.go:141] libmachine: (no-preload-113000) Calling .DriverName
	I1002 04:35:00.675801   18146 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 04:35:00.675808   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 04:35:00.675819   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHHostname
	I1002 04:35:00.675919   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHPort
	I1002 04:35:00.676045   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHKeyPath
	I1002 04:35:00.676195   18146 main.go:141] libmachine: (no-preload-113000) Calling .GetSSHUsername
	I1002 04:35:00.676297   18146 sshutil.go:53] new ssh client: &{IP:192.168.70.70 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/no-preload-113000/id_rsa Username:docker}
	I1002 04:35:00.713972   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1002 04:35:00.713984   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1002 04:35:00.729782   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1002 04:35:00.729795   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1002 04:35:00.746593   18146 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 04:35:00.746603   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1002 04:35:00.760245   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1002 04:35:00.760256   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1002 04:35:00.772433   18146 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 04:35:00.772445   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 04:35:00.785345   18146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 04:35:00.798675   18146 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 04:35:00.798703   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 04:35:00.816249   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1002 04:35:00.816263   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1002 04:35:00.844748   18146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 04:35:00.864161   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1002 04:35:00.864176   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1002 04:35:00.921621   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1002 04:35:00.921635   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1002 04:35:00.980286   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1002 04:35:00.980299   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1002 04:35:01.038204   18146 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 04:35:01.038217   18146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1002 04:35:01.083478   18146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1002 04:35:01.167825   18146 node_ready.go:49] node "no-preload-113000" has status "Ready":"True"
	I1002 04:35:01.167837   18146 node_ready.go:38] duration metric: took 637.634499ms waiting for node "no-preload-113000" to be "Ready" ...
	I1002 04:35:01.167844   18146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 04:35:01.172947   18146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:02.240421   18146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.455056455s)
	I1002 04:35:02.240450   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.240472   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.240498   18146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.395729974s)
	I1002 04:35:02.240503   18146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.567586834s)
	I1002 04:35:02.240529   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.240537   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.240536   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.240569   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.240687   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.240703   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.240718   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.240729   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.240809   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.240821   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.240834   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.240841   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.240872   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.240885   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.240906   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.240925   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.240933   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.240971   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.240999   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.241045   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.241112   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.241122   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.241130   18146 addons.go:467] Verifying addon metrics-server=true in "no-preload-113000"
	I1002 04:35:02.241157   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.241221   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.241265   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.241277   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.247325   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.247338   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.247543   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.247558   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.247544   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.620473   18146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.536953325s)
	I1002 04:35:02.620498   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.620507   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.620670   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.620679   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.620687   18146 main.go:141] libmachine: Making call to close driver server
	I1002 04:35:02.620692   18146 main.go:141] libmachine: (no-preload-113000) Calling .Close
	I1002 04:35:02.620694   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.620862   18146 main.go:141] libmachine: Successfully made call to close driver server
	I1002 04:35:02.620863   18146 main.go:141] libmachine: (no-preload-113000) DBG | Closing plugin on server side
	I1002 04:35:02.620871   18146 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 04:35:02.644635   18146 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-113000 addons enable metrics-server	
	
	
	I1002 04:35:02.717802   18146 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1002 04:35:02.792492   18146 addons.go:502] enable addons completed in 2.515448518s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1002 04:35:03.373492   18146 pod_ready.go:102] pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:03.157071   17532 pod_ready.go:102] pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:05.157846   17532 pod_ready.go:102] pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:05.871760   18146 pod_ready.go:102] pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:06.871155   18146 pod_ready.go:92] pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:06.871167   18146 pod_ready.go:81] duration metric: took 5.698041144s waiting for pod "coredns-5dd5756b68-ck4h8" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:06.871174   18146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:07.656430   17532 pod_ready.go:102] pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:09.157470   17532 pod_ready.go:92] pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:09.157482   17532 pod_ready.go:81] duration metric: took 10.52860269s waiting for pod "coredns-5644d7b6d9-5b82n" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:09.157489   17532 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-55bz7" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:09.160930   17532 pod_ready.go:92] pod "kube-proxy-55bz7" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:09.160938   17532 pod_ready.go:81] duration metric: took 3.445379ms waiting for pod "kube-proxy-55bz7" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:09.160943   17532 pod_ready.go:38] duration metric: took 10.539702608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 04:35:09.160974   17532 api_server.go:52] waiting for apiserver process to appear ...
	I1002 04:35:09.161027   17532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 04:35:09.172157   17532 api_server.go:72] duration metric: took 10.922915991s to wait for apiserver process to appear ...
	I1002 04:35:09.172170   17532 api_server.go:88] waiting for apiserver healthz status ...
	I1002 04:35:09.172180   17532 api_server.go:253] Checking apiserver healthz at https://192.168.70.68:8443/healthz ...
	I1002 04:35:09.177406   17532 api_server.go:279] https://192.168.70.68:8443/healthz returned 200:
	ok
	I1002 04:35:09.177976   17532 api_server.go:141] control plane version: v1.16.0
	I1002 04:35:09.177988   17532 api_server.go:131] duration metric: took 5.813432ms to wait for apiserver health ...
	I1002 04:35:09.177994   17532 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 04:35:09.180506   17532 system_pods.go:59] 4 kube-system pods found
	I1002 04:35:09.180521   17532 system_pods.go:61] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:09.180525   17532 system_pods.go:61] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:09.180532   17532 system_pods.go:61] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:09.180537   17532 system_pods.go:61] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:09.180544   17532 system_pods.go:74] duration metric: took 2.546097ms to wait for pod list to return data ...
	I1002 04:35:09.180550   17532 default_sa.go:34] waiting for default service account to be created ...
	I1002 04:35:09.182508   17532 default_sa.go:45] found service account: "default"
	I1002 04:35:09.182521   17532 default_sa.go:55] duration metric: took 1.965968ms for default service account to be created ...
	I1002 04:35:09.182530   17532 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 04:35:09.185584   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:09.185597   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:09.185601   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:09.185606   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:09.185611   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:09.185624   17532 retry.go:31] will retry after 302.625093ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:09.492374   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:09.492389   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:09.492396   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:09.492401   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:09.492407   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:09.492417   17532 retry.go:31] will retry after 308.655336ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:09.804674   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:09.804688   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:09.804693   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:09.804700   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:09.804705   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:09.804715   17532 retry.go:31] will retry after 461.030441ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:10.270271   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:10.270286   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:10.270291   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:10.270296   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:10.270307   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:10.270317   17532 retry.go:31] will retry after 417.231224ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:10.691892   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:10.691906   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:10.691911   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:10.691915   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:10.691941   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:10.691951   17532 retry.go:31] will retry after 744.815434ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:11.440829   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:11.440851   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:11.440858   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:11.440874   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:11.440882   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:11.440895   17532 retry.go:31] will retry after 669.74408ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:12.114174   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:12.114192   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:12.114198   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:12.114206   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:12.114210   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:12.114224   17532 retry.go:31] will retry after 1.108357609s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:08.883745   18146 pod_ready.go:102] pod "etcd-no-preload-113000" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:10.883964   18146 pod_ready.go:102] pod "etcd-no-preload-113000" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:13.384643   18146 pod_ready.go:102] pod "etcd-no-preload-113000" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:13.225715   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:13.225730   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:13.225734   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:13.225740   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:13.225744   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:13.225752   17532 retry.go:31] will retry after 915.479869ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:14.144036   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:14.144050   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:14.144054   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:14.144059   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:14.144073   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:14.144082   17532 retry.go:31] will retry after 1.419428408s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:15.567696   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:15.567710   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:15.567714   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:15.567719   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:15.567724   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:15.567755   17532 retry.go:31] will retry after 1.680228048s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:13.883584   18146 pod_ready.go:92] pod "etcd-no-preload-113000" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:13.883596   18146 pod_ready.go:81] duration metric: took 7.012153657s waiting for pod "etcd-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.883603   18146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.888142   18146 pod_ready.go:92] pod "kube-apiserver-no-preload-113000" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:13.888152   18146 pod_ready.go:81] duration metric: took 4.544163ms waiting for pod "kube-apiserver-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.888159   18146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.891730   18146 pod_ready.go:92] pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:13.891739   18146 pod_ready.go:81] duration metric: took 3.574992ms waiting for pod "kube-controller-manager-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.891746   18146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ngk77" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.895074   18146 pod_ready.go:92] pod "kube-proxy-ngk77" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:13.895082   18146 pod_ready.go:81] duration metric: took 3.332208ms waiting for pod "kube-proxy-ngk77" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.895088   18146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.898221   18146 pod_ready.go:92] pod "kube-scheduler-no-preload-113000" in "kube-system" namespace has status "Ready":"True"
	I1002 04:35:13.898229   18146 pod_ready.go:81] duration metric: took 3.137104ms waiting for pod "kube-scheduler-no-preload-113000" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:13.898235   18146 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace to be "Ready" ...
	I1002 04:35:16.187692   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:17.252745   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:17.252759   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:17.252763   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:17.252768   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:17.252773   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:17.252782   17532 retry.go:31] will retry after 2.103656061s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:19.359374   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:19.359387   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:19.359392   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:19.359399   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:19.359403   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:19.359413   17532 retry.go:31] will retry after 2.486046532s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:21.849632   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:21.849645   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:21.849649   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:21.849654   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:21.849659   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:21.849668   17532 retry.go:31] will retry after 3.958614474s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:18.686262   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:20.688514   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:22.689555   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:25.812760   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:25.812774   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:25.812778   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:25.812783   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:25.812788   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:25.812796   17532 retry.go:31] will retry after 5.344105965s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:25.185685   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:27.186280   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:31.161586   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:31.161604   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:31.161611   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:31.161618   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:31.161624   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:31.161635   17532 retry.go:31] will retry after 7.033521978s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:29.188580   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:31.692033   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:34.186323   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:36.188939   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:38.198180   17532 system_pods.go:86] 4 kube-system pods found
	I1002 04:35:38.198193   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:38.198198   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:38.198203   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:38.198208   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:38.198218   17532 retry.go:31] will retry after 5.882794014s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:38.689353   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:40.689813   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:43.188690   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:44.087650   17532 system_pods.go:86] 5 kube-system pods found
	I1002 04:35:44.087665   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:44.087669   17532 system_pods.go:89] "etcd-old-k8s-version-150000" [7c5cecef-763a-4fde-a40d-32f396358eb1] Running
	I1002 04:35:44.087672   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:44.087679   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:44.087683   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:44.087692   17532 retry.go:31] will retry after 8.674154964s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 04:35:45.687584   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:47.688187   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:49.688327   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:52.187957   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:52.765559   17532 system_pods.go:86] 7 kube-system pods found
	I1002 04:35:52.765573   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:35:52.765577   17532 system_pods.go:89] "etcd-old-k8s-version-150000" [7c5cecef-763a-4fde-a40d-32f396358eb1] Running
	I1002 04:35:52.765581   17532 system_pods.go:89] "kube-apiserver-old-k8s-version-150000" [120e7487-bcb0-4a6b-b493-11bd2eda3d64] Running
	I1002 04:35:52.765584   17532 system_pods.go:89] "kube-controller-manager-old-k8s-version-150000" [8793ba80-246e-4783-8b24-f13a0f41665f] Running
	I1002 04:35:52.765588   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:35:52.765592   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:35:52.765598   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:35:52.765606   17532 retry.go:31] will retry after 10.651122239s: missing components: kube-scheduler
	I1002 04:35:54.188182   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:35:56.687655   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:03.421011   17532 system_pods.go:86] 8 kube-system pods found
	I1002 04:36:03.421024   17532 system_pods.go:89] "coredns-5644d7b6d9-5b82n" [7f742117-42ae-42e2-ade5-8eb93e0b65d3] Running
	I1002 04:36:03.421028   17532 system_pods.go:89] "etcd-old-k8s-version-150000" [7c5cecef-763a-4fde-a40d-32f396358eb1] Running
	I1002 04:36:03.421032   17532 system_pods.go:89] "kube-apiserver-old-k8s-version-150000" [120e7487-bcb0-4a6b-b493-11bd2eda3d64] Running
	I1002 04:36:03.421036   17532 system_pods.go:89] "kube-controller-manager-old-k8s-version-150000" [8793ba80-246e-4783-8b24-f13a0f41665f] Running
	I1002 04:36:03.421039   17532 system_pods.go:89] "kube-proxy-55bz7" [c28c764b-7062-4a85-9ca8-1ab496030222] Running
	I1002 04:36:03.421042   17532 system_pods.go:89] "kube-scheduler-old-k8s-version-150000" [a267e01d-33af-4cdc-bfbc-12316ef115aa] Running
	I1002 04:36:03.421048   17532 system_pods.go:89] "metrics-server-74d5856cc6-7lr8g" [e4b33674-5b0b-4e4d-afe5-c81a950dcb7a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 04:36:03.421055   17532 system_pods.go:89] "storage-provisioner" [21f81476-7b13-40bd-bc9c-c5269347bb6b] Running
	I1002 04:36:03.421062   17532 system_pods.go:126] duration metric: took 54.237570335s to wait for k8s-apps to be running ...
	I1002 04:36:03.421067   17532 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 04:36:03.421115   17532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 04:36:03.430072   17532 system_svc.go:56] duration metric: took 9.000016ms WaitForService to wait for kubelet.
	I1002 04:36:03.430084   17532 kubeadm.go:581] duration metric: took 1m5.179891231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 04:36:03.430096   17532 node_conditions.go:102] verifying NodePressure condition ...
	I1002 04:36:03.432054   17532 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 04:36:03.432065   17532 node_conditions.go:123] node cpu capacity is 2
	I1002 04:36:03.432072   17532 node_conditions.go:105] duration metric: took 1.971761ms to run NodePressure ...
	I1002 04:36:03.432079   17532 start.go:228] waiting for startup goroutines ...
	I1002 04:36:03.432085   17532 start.go:233] waiting for cluster config update ...
	I1002 04:36:03.432093   17532 start.go:242] writing updated cluster config ...
	I1002 04:36:03.433084   17532 ssh_runner.go:195] Run: rm -f paused
	I1002 04:36:03.471080   17532 start.go:600] kubectl: 1.27.2, cluster: 1.16.0 (minor skew: 11)
	I1002 04:35:58.690047   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:01.189553   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:03.492893   17532 out.go:177] 
	W1002 04:36:03.535642   17532 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 04:36:03.557595   17532 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 04:36:03.601675   17532 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-150000" cluster and "default" namespace by default
	I1002 04:36:03.687324   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:05.688926   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:08.190007   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:10.689006   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	I1002 04:36:13.188479   18146 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ls7vw" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 11:28:20 UTC, ends at Mon 2023-10-02 11:36:14 UTC. --
	Oct 02 11:35:15 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:15.351958531Z" level=warning msg="cleaning up after shim disconnected" id=46e66e10f8dde849a275b20ec9aa4274f905aabb1115a972ba8fbdbaa248ee57 namespace=moby
	Oct 02 11:35:15 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:15.351967792Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 11:35:15 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:15.352294548Z" level=info msg="ignoring event" container=46e66e10f8dde849a275b20ec9aa4274f905aabb1115a972ba8fbdbaa248ee57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 11:35:27 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:27.387456593Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:27 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:27.387493351Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:27 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:27.388384184Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.427378669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.427653408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.427717458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.427853980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.700446048Z" level=info msg="shim disconnected" id=c3561a80db1ed995989f8abee3fc295984fef610628bf722128b7842a194e097 namespace=moby
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.700511715Z" level=warning msg="cleaning up after shim disconnected" id=c3561a80db1ed995989f8abee3fc295984fef610628bf722128b7842a194e097 namespace=moby
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:35:40.700521011Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 11:35:40 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:40.701175625Z" level=info msg="ignoring event" container=c3561a80db1ed995989f8abee3fc295984fef610628bf722128b7842a194e097 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 11:35:54 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:54.388142829Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:54 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:54.388162124Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:54 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:35:54.390173024Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.419989150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.420111191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.420148119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.420166194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1187]: time="2023-10-02T11:36:10.690669648Z" level=info msg="ignoring event" container=8cbb664bd2b11059a5798ca0ae17fcdea934d5a20583b11315aab4826aace079 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.691437846Z" level=info msg="shim disconnected" id=8cbb664bd2b11059a5798ca0ae17fcdea934d5a20583b11315aab4826aace079 namespace=moby
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.691738985Z" level=warning msg="cleaning up after shim disconnected" id=8cbb664bd2b11059a5798ca0ae17fcdea934d5a20583b11315aab4826aace079 namespace=moby
	Oct 02 11:36:10 old-k8s-version-150000 dockerd[1193]: time="2023-10-02T11:36:10.691787212Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* time="2023-10-02T11:36:15Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                     PORTS     NAMES
	8cbb664bd2b1   a90209bb39e3             "nginx -g 'daemon of…"   5 seconds ago        Exited (1) 4 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard_8ac82831-ae3a-42ac-bef6-e04d3522dbeb_3
	bb567e70a9e9   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                    k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-dmjwb_kubernetes-dashboard_0a37a198-e551-4e63-9e7a-42bf11865873_0
	e104c5296d44   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_metrics-server-74d5856cc6-7lr8g_kube-system_e4b33674-5b0b-4e4d-afe5-c81a950dcb7a_0
	204bae62d5d0   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kubernetes-dashboard-84b68f675b-dmjwb_kubernetes-dashboard_0a37a198-e551-4e63-9e7a-42bf11865873_0
	f156ff3ef6d9   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard_8ac82831-ae3a-42ac-bef6-e04d3522dbeb_0
	54b7276251df   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                    k8s_storage-provisioner_storage-provisioner_kube-system_21f81476-7b13-40bd-bc9c-c5269347bb6b_0
	8d0f44b6dc2f   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_storage-provisioner_kube-system_21f81476-7b13-40bd-bc9c-c5269347bb6b_0
	f4240284f59f   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                    k8s_coredns_coredns-5644d7b6d9-5b82n_kube-system_7f742117-42ae-42e2-ade5-8eb93e0b65d3_0
	266ab10453e2   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                    k8s_kube-proxy_kube-proxy-55bz7_kube-system_c28c764b-7062-4a85-9ca8-1ab496030222_0
	88f35423d40c   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_coredns-5644d7b6d9-5b82n_kube-system_7f742117-42ae-42e2-ade5-8eb93e0b65d3_0
	34d06d91eff6   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-proxy-55bz7_kube-system_c28c764b-7062-4a85-9ca8-1ab496030222_0
	001e6499da51   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                    k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-150000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	9b396600d78b   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                    k8s_etcd_etcd-old-k8s-version-150000_kube-system_bb95cef294429464340698716fe400bb_0
	b558a0323a9b   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                    k8s_kube-apiserver_kube-apiserver-old-k8s-version-150000_kube-system_d4dd2a562e4a95b2140e074db9d68d54_0
	0de3b73d8f50   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                    k8s_kube-scheduler_kube-scheduler-old-k8s-version-150000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	cb0d9b7eeca1   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-controller-manager-old-k8s-version-150000_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	9b8199547495   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-apiserver-old-k8s-version-150000_kube-system_d4dd2a562e4a95b2140e074db9d68d54_0
	60cd61961daa   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_etcd-old-k8s-version-150000_kube-system_bb95cef294429464340698716fe400bb_0
	6c4be1d90678   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-scheduler-old-k8s-version-150000_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	
	* 
	* ==> coredns [f4240284f59f] <==
	* .:53
	2023-10-02T11:34:59.537Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-02T11:34:59.537Z [INFO] CoreDNS-1.6.2
	2023-10-02T11:34:59.537Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-02T11:35:35.942Z [INFO] plugin/reload: Running configuration MD5 = 7c89cb30fdf15978e415c91b2188e89e
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-150000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-150000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45957ed538272972541ab48cdf2c4b323d7f5c18
	                    minikube.k8s.io/name=old-k8s-version-150000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T04_34_41_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 11:34:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 11:35:37 +0000   Mon, 02 Oct 2023 11:34:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 11:35:37 +0000   Mon, 02 Oct 2023 11:34:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 11:35:37 +0000   Mon, 02 Oct 2023 11:34:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 11:35:37 +0000   Mon, 02 Oct 2023 11:34:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.70.68
	  Hostname:    old-k8s-version-150000
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2166052Ki
	 pods:               110
	System Info:
	 Machine ID:                 6494808a27ce4e54b16978dc8dc46abd
	 System UUID:                64d411ee-0000-0000-88c4-149d997cd0f1
	 Boot ID:                    c2a79988-6f9a-4801-9cc5-57f25f03be1a
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-5b82n                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                etcd-old-k8s-version-150000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                kube-apiserver-old-k8s-version-150000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                kube-controller-manager-old-k8s-version-150000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                kube-proxy-55bz7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                kube-scheduler-old-k8s-version-150000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                metrics-server-74d5856cc6-7lr8g                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         74s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-5sjfw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-dmjwb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet, old-k8s-version-150000     Node old-k8s-version-150000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet, old-k8s-version-150000     Node old-k8s-version-150000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet, old-k8s-version-150000     Node old-k8s-version-150000 status is now: NodeHasSufficientPID
	  Normal  Starting                 76s                  kube-proxy, old-k8s-version-150000  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.028794] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +5.021502] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007145] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.317430] systemd-fstab-generator[124]: Ignoring "noauto" for root device
	[  +0.044335] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.940571] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.117104] systemd-fstab-generator[520]: Ignoring "noauto" for root device
	[  +0.086161] systemd-fstab-generator[531]: Ignoring "noauto" for root device
	[  +0.746577] systemd-fstab-generator[787]: Ignoring "noauto" for root device
	[  +0.227098] systemd-fstab-generator[826]: Ignoring "noauto" for root device
	[  +0.088025] systemd-fstab-generator[837]: Ignoring "noauto" for root device
	[  +0.093125] systemd-fstab-generator[850]: Ignoring "noauto" for root device
	[  +6.360131] systemd-fstab-generator[1159]: Ignoring "noauto" for root device
	[  +1.231469] kauditd_printk_skb: 67 callbacks suppressed
	[ +14.148342] systemd-fstab-generator[1613]: Ignoring "noauto" for root device
	[Oct 2 11:29] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.096892] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +22.132134] kauditd_printk_skb: 5 callbacks suppressed
	[Oct 2 11:34] systemd-fstab-generator[6961]: Ignoring "noauto" for root device
	[Oct 2 11:35] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +0.752427] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [9b396600d78b] <==
	* 2023-10-02 11:34:34.311760 I | raft: 7410ed26f2a886f became follower at term 1
	2023-10-02 11:34:34.345144 W | auth: simple token is not cryptographically signed
	2023-10-02 11:34:34.351580 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-02 11:34:34.353341 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 11:34:34.353577 I | embed: listening for metrics on http://192.168.70.68:2381
	2023-10-02 11:34:34.353992 I | etcdserver: 7410ed26f2a886f as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-02 11:34:34.354050 I | etcdserver/membership: added member 7410ed26f2a886f [https://192.168.70.68:2380] to cluster d9c9f5c6c4c232c1
	2023-10-02 11:34:34.354100 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-02 11:34:34.512437 I | raft: 7410ed26f2a886f is starting a new election at term 1
	2023-10-02 11:34:34.512464 I | raft: 7410ed26f2a886f became candidate at term 2
	2023-10-02 11:34:34.512483 I | raft: 7410ed26f2a886f received MsgVoteResp from 7410ed26f2a886f at term 2
	2023-10-02 11:34:34.512498 I | raft: 7410ed26f2a886f became leader at term 2
	2023-10-02 11:34:34.512505 I | raft: raft.node: 7410ed26f2a886f elected leader 7410ed26f2a886f at term 2
	2023-10-02 11:34:34.512727 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-02 11:34:34.513484 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-02 11:34:34.513558 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-02 11:34:34.513585 I | etcdserver: published {Name:old-k8s-version-150000 ClientURLs:[https://192.168.70.68:2379]} to cluster d9c9f5c6c4c232c1
	2023-10-02 11:34:34.513718 I | embed: ready to serve client requests
	2023-10-02 11:34:34.514851 I | embed: serving client requests on 192.168.70.68:2379
	2023-10-02 11:34:34.520410 I | embed: ready to serve client requests
	2023-10-02 11:34:34.521013 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 11:34:58.807506 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:1256" took too long (209.822116ms) to execute
	2023-10-02 11:34:58.811090 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-150000\" " with result "range_response_count:1 size:2997" took too long (147.430688ms) to execute
	2023-10-02 11:34:58.811668 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-150000\" " with result "range_response_count:1 size:2997" took too long (126.494324ms) to execute
	2023-10-02 11:35:00.595856 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:680" took too long (112.472471ms) to execute
	
	* 
	* ==> kernel <==
	*  11:36:15 up 8 min,  0 users,  load average: 0.57, 0.43, 0.20
	Linux old-k8s-version-150000 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b558a0323a9b] <==
	* I1002 11:34:38.228284       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1002 11:34:38.228374       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 11:34:38.232608       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1002 11:34:38.242093       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1002 11:34:38.242124       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1002 11:34:40.013352       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 11:34:40.294756       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 11:34:40.589486       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.70.68]
	I1002 11:34:40.590245       1 controller.go:606] quota admission added evaluator for: endpoints
	I1002 11:34:41.505120       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1002 11:34:41.681286       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1002 11:34:41.982845       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1002 11:34:58.294158       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1002 11:34:58.320170       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1002 11:34:58.494074       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1002 11:35:01.512175       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 11:35:01.512433       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 11:35:01.512693       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 11:35:01.512857       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 11:36:01.513332       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 11:36:01.513630       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 11:36:01.513847       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 11:36:01.513950       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [001e6499da51] <==
	* E1002 11:35:00.260018       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.264601       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.264627       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"255b621e-8b88-4951-a6e2-ee2a70aaacd1", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.267469       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.267642       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"255b621e-8b88-4951-a6e2-ee2a70aaacd1", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.302246       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"e3c5051f-d9c7-44aa-94e3-38d0f92b8082", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	E1002 11:35:00.353567       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.353807       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"255b621e-8b88-4951-a6e2-ee2a70aaacd1", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.353840       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"25ad3835-2df9-457c-8309-fcd8b46de138", APIVersion:"apps/v1", ResourceVersion:"409", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.409313       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.432385       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.432433       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"255b621e-8b88-4951-a6e2-ee2a70aaacd1", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.435522       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.435539       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"25ad3835-2df9-457c-8309-fcd8b46de138", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.458296       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.458446       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"25ad3835-2df9-457c-8309-fcd8b46de138", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 11:35:00.466477       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:00.466575       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"25ad3835-2df9-457c-8309-fcd8b46de138", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 11:35:01.041003       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"4d8476f7-8288-4934-b8f5-46a995e1f2a2", APIVersion:"apps/v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-7lr8g
	I1002 11:35:01.604906       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"255b621e-8b88-4951-a6e2-ee2a70aaacd1", APIVersion:"apps/v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-5sjfw
	I1002 11:35:01.621773       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"25ad3835-2df9-457c-8309-fcd8b46de138", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-dmjwb
	E1002 11:35:28.723483       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 11:35:30.460484       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 11:35:58.976237       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 11:36:02.462334       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [266ab10453e2] <==
	* W1002 11:34:59.403353       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1002 11:34:59.414874       1 node.go:135] Successfully retrieved node IP: 192.168.70.68
	I1002 11:34:59.414913       1 server_others.go:149] Using iptables Proxier.
	I1002 11:34:59.415253       1 server.go:529] Version: v1.16.0
	I1002 11:34:59.418081       1 config.go:313] Starting service config controller
	I1002 11:34:59.418114       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1002 11:34:59.418241       1 config.go:131] Starting endpoints config controller
	I1002 11:34:59.418258       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1002 11:34:59.520718       1 shared_informer.go:204] Caches are synced for service config 
	I1002 11:34:59.520837       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [0de3b73d8f50] <==
	* W1002 11:34:37.314823       1 authentication.go:79] Authentication is disabled
	I1002 11:34:37.315013       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1002 11:34:37.315891       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1002 11:34:37.331577       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:34:37.331635       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:34:37.333475       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:34:37.336665       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:34:37.336771       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:34:37.336895       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:34:37.336903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:34:37.337080       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:34:37.339217       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:34:37.339294       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:34:37.339399       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 11:34:38.332784       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 11:34:38.337121       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 11:34:38.338105       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 11:34:38.340131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 11:34:38.342699       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 11:34:38.344208       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 11:34:38.346570       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 11:34:38.347291       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 11:34:38.348037       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 11:34:38.349275       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 11:34:38.349948       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 11:28:20 UTC, ends at Mon 2023-10-02 11:36:16 UTC. --
	Oct 02 11:35:15 old-k8s-version-150000 kubelet[6967]: W1002 11:35:15.374259    6967 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod8ac82831-ae3a-42ac-bef6-e04d3522dbeb/46e66e10f8dde849a275b20ec9aa4274f905aabb1115a972ba8fbdbaa248ee57": none of the resources are being tracked.
	Oct 02 11:35:16 old-k8s-version-150000 kubelet[6967]: W1002 11:35:16.026869    6967 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-5sjfw through plugin: invalid network status for
	Oct 02 11:35:16 old-k8s-version-150000 kubelet[6967]: E1002 11:35:16.030846    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:35:17 old-k8s-version-150000 kubelet[6967]: W1002 11:35:17.036886    6967 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-5sjfw through plugin: invalid network status for
	Oct 02 11:35:17 old-k8s-version-150000 kubelet[6967]: E1002 11:35:17.040185    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:35:24 old-k8s-version-150000 kubelet[6967]: E1002 11:35:24.897972    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:35:27 old-k8s-version-150000 kubelet[6967]: E1002 11:35:27.388745    6967 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host
	Oct 02 11:35:27 old-k8s-version-150000 kubelet[6967]: E1002 11:35:27.389005    6967 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host
	Oct 02 11:35:27 old-k8s-version-150000 kubelet[6967]: E1002 11:35:27.389142    6967 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host
	Oct 02 11:35:27 old-k8s-version-150000 kubelet[6967]: E1002 11:35:27.389198    6967 pod_workers.go:191] Error syncing pod e4b33674-5b0b-4e4d-afe5-c81a950dcb7a ("metrics-server-74d5856cc6-7lr8g_kube-system(e4b33674-5b0b-4e4d-afe5-c81a950dcb7a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:41 old-k8s-version-150000 kubelet[6967]: W1002 11:35:41.178916    6967 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-5sjfw through plugin: invalid network status for
	Oct 02 11:35:41 old-k8s-version-150000 kubelet[6967]: E1002 11:35:41.182493    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:35:42 old-k8s-version-150000 kubelet[6967]: W1002 11:35:42.188505    6967 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-5sjfw through plugin: invalid network status for
	Oct 02 11:35:42 old-k8s-version-150000 kubelet[6967]: E1002 11:35:42.384198    6967 pod_workers.go:191] Error syncing pod e4b33674-5b0b-4e4d-afe5-c81a950dcb7a ("metrics-server-74d5856cc6-7lr8g_kube-system(e4b33674-5b0b-4e4d-afe5-c81a950dcb7a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 11:35:44 old-k8s-version-150000 kubelet[6967]: E1002 11:35:44.895902    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:35:54 old-k8s-version-150000 kubelet[6967]: E1002 11:35:54.390605    6967 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host
	Oct 02 11:35:54 old-k8s-version-150000 kubelet[6967]: E1002 11:35:54.390684    6967 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host
	Oct 02 11:35:54 old-k8s-version-150000 kubelet[6967]: E1002 11:35:54.390716    6967 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host
	Oct 02 11:35:54 old-k8s-version-150000 kubelet[6967]: E1002 11:35:54.391097    6967 pod_workers.go:191] Error syncing pod e4b33674-5b0b-4e4d-afe5-c81a950dcb7a ("metrics-server-74d5856cc6-7lr8g_kube-system(e4b33674-5b0b-4e4d-afe5-c81a950dcb7a)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.70.1:53: no such host"
	Oct 02 11:35:59 old-k8s-version-150000 kubelet[6967]: E1002 11:35:59.383460    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:36:08 old-k8s-version-150000 kubelet[6967]: E1002 11:36:08.385654    6967 pod_workers.go:191] Error syncing pod e4b33674-5b0b-4e4d-afe5-c81a950dcb7a ("metrics-server-74d5856cc6-7lr8g_kube-system(e4b33674-5b0b-4e4d-afe5-c81a950dcb7a)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 11:36:11 old-k8s-version-150000 kubelet[6967]: W1002 11:36:11.361280    6967 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-5sjfw through plugin: invalid network status for
	Oct 02 11:36:11 old-k8s-version-150000 kubelet[6967]: E1002 11:36:11.365364    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	Oct 02 11:36:12 old-k8s-version-150000 kubelet[6967]: W1002 11:36:12.370719    6967 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-5sjfw through plugin: invalid network status for
	Oct 02 11:36:14 old-k8s-version-150000 kubelet[6967]: E1002 11:36:14.896884    6967 pod_workers.go:191] Error syncing pod 8ac82831-ae3a-42ac-bef6-e04d3522dbeb ("dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-5sjfw_kubernetes-dashboard(8ac82831-ae3a-42ac-bef6-e04d3522dbeb)"
	
	* 
	* ==> kubernetes-dashboard [bb567e70a9e9] <==
	* 2023/10/02 11:35:07 Starting overwatch
	2023/10/02 11:35:07 Using namespace: kubernetes-dashboard
	2023/10/02 11:35:07 Using in-cluster config to connect to apiserver
	2023/10/02 11:35:07 Using secret token for csrf signing
	2023/10/02 11:35:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/02 11:35:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/02 11:35:07 Successful initial request to the apiserver, version: v1.16.0
	2023/10/02 11:35:07 Generating JWE encryption key
	2023/10/02 11:35:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/02 11:35:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/02 11:35:07 Initializing JWE encryption key from synchronized object
	2023/10/02 11:35:07 Creating in-cluster Sidecar client
	2023/10/02 11:35:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/02 11:35:07 Serving insecurely on HTTP port: 9090
	2023/10/02 11:35:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/02 11:36:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [54b7276251df] <==
	* I1002 11:35:00.485697       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 11:35:00.692657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 11:35:00.692712       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 11:35:00.789473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 11:35:00.803331       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-150000_4440c4fa-053d-48e3-82a8-3aa503370219!
	I1002 11:35:00.803901       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f59b38b-19e8-4242-84d9-57e3d81f5840", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-150000_4440c4fa-053d-48e3-82a8-3aa503370219 became leader
	I1002 11:35:00.905447       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-150000_4440c4fa-053d-48e3-82a8-3aa503370219!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-150000 -n old-k8s-version-150000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-150000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-7lr8g
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-150000 describe pod metrics-server-74d5856cc6-7lr8g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-150000 describe pod metrics-server-74d5856cc6-7lr8g: exit status 1 (50.151184ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-7lr8g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-150000 describe pod metrics-server-74d5856cc6-7lr8g: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.00s)

                                                
                                    

Test pass (285/309)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.31
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.4
10 TestDownloadOnly/v1.28.2/json-events 5.36
11 TestDownloadOnly/v1.28.2/preload-exists 0
14 TestDownloadOnly/v1.28.2/kubectl 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.33
16 TestDownloadOnly/DeleteAll 0.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
19 TestBinaryMirror 1.02
20 TestOffline 57.57
22 TestAddons/Setup 128.2
24 TestAddons/parallel/Registry 14.72
25 TestAddons/parallel/Ingress 20.09
26 TestAddons/parallel/InspektorGadget 10.49
27 TestAddons/parallel/MetricsServer 5.54
28 TestAddons/parallel/HelmTiller 10.49
30 TestAddons/parallel/CSI 38.25
31 TestAddons/parallel/Headlamp 11.96
32 TestAddons/parallel/CloudSpanner 5.35
33 TestAddons/parallel/LocalPath 51.93
36 TestAddons/serial/GCPAuth/Namespaces 0.1
37 TestAddons/StoppedEnableDisable 5.72
38 TestCertOptions 38.16
39 TestCertExpiration 246.21
40 TestDockerFlags 46.17
41 TestForceSystemdFlag 39.37
42 TestForceSystemdEnv 43.43
45 TestHyperKitDriverInstallOrUpdate 5.97
48 TestErrorSpam/setup 34.73
49 TestErrorSpam/start 1.46
50 TestErrorSpam/status 0.47
51 TestErrorSpam/pause 1.25
52 TestErrorSpam/unpause 1.28
53 TestErrorSpam/stop 3.62
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 49.47
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 38.64
60 TestFunctional/serial/KubeContext 0.03
61 TestFunctional/serial/KubectlGetPods 0.06
64 TestFunctional/serial/CacheCmd/cache/add_remote 4.4
65 TestFunctional/serial/CacheCmd/cache/add_local 1.59
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
67 TestFunctional/serial/CacheCmd/cache/list 0.07
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
70 TestFunctional/serial/CacheCmd/cache/delete 0.13
71 TestFunctional/serial/MinikubeKubectlCmd 0.54
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
73 TestFunctional/serial/ExtraConfig 39.9
74 TestFunctional/serial/ComponentHealth 0.05
75 TestFunctional/serial/LogsCmd 3
76 TestFunctional/serial/LogsFileCmd 2.85
77 TestFunctional/serial/InvalidService 4.88
79 TestFunctional/parallel/ConfigCmd 0.41
80 TestFunctional/parallel/DashboardCmd 12.52
81 TestFunctional/parallel/DryRun 1.27
82 TestFunctional/parallel/InternationalLanguage 0.85
83 TestFunctional/parallel/StatusCmd 0.55
87 TestFunctional/parallel/ServiceCmdConnect 9.58
88 TestFunctional/parallel/AddonsCmd 0.24
89 TestFunctional/parallel/PersistentVolumeClaim 27.91
91 TestFunctional/parallel/SSHCmd 0.29
92 TestFunctional/parallel/CpCmd 0.66
93 TestFunctional/parallel/MySQL 28.49
94 TestFunctional/parallel/FileSync 0.24
95 TestFunctional/parallel/CertSync 1.15
99 TestFunctional/parallel/NodeLabels 0.08
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
103 TestFunctional/parallel/License 0.5
104 TestFunctional/parallel/Version/short 0.09
105 TestFunctional/parallel/Version/components 0.55
106 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
107 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
108 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
110 TestFunctional/parallel/ImageCommands/ImageBuild 2.24
111 TestFunctional/parallel/ImageCommands/Setup 2.85
112 TestFunctional/parallel/DockerEnv/bash 0.77
113 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
114 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
115 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
116 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.5
117 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.28
118 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.56
119 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.2
120 TestFunctional/parallel/ImageCommands/ImageRemove 0.35
121 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.3
122 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.29
123 TestFunctional/parallel/ServiceCmd/DeployApp 13.13
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
127 TestFunctional/parallel/ServiceCmd/List 0.24
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
132 TestFunctional/parallel/ServiceCmd/Format 0.25
133 TestFunctional/parallel/ServiceCmd/URL 0.24
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
141 TestFunctional/parallel/ProfileCmd/profile_list 0.32
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
143 TestFunctional/parallel/MountCmd/any-port 6.05
144 TestFunctional/parallel/MountCmd/specific-port 1.79
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.62
146 TestFunctional/delete_addon-resizer_images 0.14
147 TestFunctional/delete_my-image_image 0.05
148 TestFunctional/delete_minikube_cached_images 0.05
154 TestIngressAddonLegacy/StartLegacyK8sCluster 99.47
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.8
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.63
158 TestIngressAddonLegacy/serial/ValidateIngressAddons 46.82
161 TestJSONOutput/start/Command 49.09
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.42
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.42
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 8.16
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.76
189 TestMainNoArgs 0.07
193 TestMountStart/serial/StartWithMountFirst 16.41
194 TestMountStart/serial/VerifyMountFirst 0.28
195 TestMountStart/serial/StartWithMountSecond 16.28
196 TestMountStart/serial/VerifyMountSecond 0.29
197 TestMountStart/serial/DeleteFirst 2.27
198 TestMountStart/serial/VerifyMountPostDelete 0.29
199 TestMountStart/serial/Stop 2.22
200 TestMountStart/serial/RestartStopped 16.58
201 TestMountStart/serial/VerifyMountPostStop 0.3
204 TestMultiNode/serial/FreshStart2Nodes 90.26
205 TestMultiNode/serial/DeployApp2Nodes 4.7
206 TestMultiNode/serial/PingHostFrom2Pods 0.8
207 TestMultiNode/serial/AddNode 32.81
208 TestMultiNode/serial/ProfileList 0.2
209 TestMultiNode/serial/CopyFile 5.12
210 TestMultiNode/serial/StopNode 2.68
211 TestMultiNode/serial/StartAfterStop 27.41
212 TestMultiNode/serial/RestartKeepsNodes 191.89
213 TestMultiNode/serial/DeleteNode 2.98
214 TestMultiNode/serial/StopMultiNode 16.48
215 TestMultiNode/serial/RestartMultiNode 128.32
216 TestMultiNode/serial/ValidateNameConflict 40.75
220 TestPreload 161.71
222 TestScheduledStopUnix 105.98
223 TestSkaffold 108.99
226 TestRunningBinaryUpgrade 155.18
228 TestKubernetesUpgrade 153.32
241 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.38
242 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.01
243 TestStoppedBinaryUpgrade/Setup 0.38
246 TestPause/serial/Start 50.19
247 TestStoppedBinaryUpgrade/MinikubeLogs 3.11
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
257 TestNoKubernetes/serial/StartWithK8s 40.95
258 TestNoKubernetes/serial/StartWithStopK8s 16.65
259 TestPause/serial/SecondStartNoReconfiguration 41.08
260 TestNoKubernetes/serial/Start 17.76
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
262 TestNoKubernetes/serial/ProfileList 0.47
263 TestNoKubernetes/serial/Stop 2.23
264 TestNoKubernetes/serial/StartNoArgs 15.34
265 TestPause/serial/Pause 0.52
266 TestPause/serial/VerifyStatus 0.15
267 TestPause/serial/Unpause 0.51
268 TestPause/serial/PauseAgain 0.57
269 TestPause/serial/DeletePaused 5.26
270 TestPause/serial/VerifyDeletedResources 0.24
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
273 TestNetworkPlugins/group/kindnet/Start 66.26
274 TestNetworkPlugins/group/calico/Start 69.54
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.14
277 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
278 TestNetworkPlugins/group/kindnet/DNS 0.12
279 TestNetworkPlugins/group/kindnet/Localhost 0.1
280 TestNetworkPlugins/group/kindnet/HairPin 0.11
281 TestNetworkPlugins/group/calico/ControllerPod 5.02
282 TestNetworkPlugins/group/custom-flannel/Start 58.95
283 TestNetworkPlugins/group/calico/KubeletFlags 0.15
284 TestNetworkPlugins/group/calico/NetCatPod 9.22
285 TestNetworkPlugins/group/calico/DNS 0.13
286 TestNetworkPlugins/group/calico/Localhost 0.11
287 TestNetworkPlugins/group/calico/HairPin 0.1
288 TestNetworkPlugins/group/false/Start 54.27
289 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
290 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.27
291 TestNetworkPlugins/group/custom-flannel/DNS 0.12
292 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
293 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
294 TestNetworkPlugins/group/false/KubeletFlags 0.15
295 TestNetworkPlugins/group/false/NetCatPod 10.2
296 TestNetworkPlugins/group/enable-default-cni/Start 49.23
297 TestNetworkPlugins/group/false/DNS 0.13
298 TestNetworkPlugins/group/false/Localhost 0.11
299 TestNetworkPlugins/group/false/HairPin 0.12
300 TestNetworkPlugins/group/flannel/Start 59.67
301 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
302 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.24
303 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
304 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
305 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
306 TestNetworkPlugins/group/bridge/Start 60.82
307 TestNetworkPlugins/group/flannel/ControllerPod 5.01
308 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
309 TestNetworkPlugins/group/flannel/NetCatPod 11.24
310 TestNetworkPlugins/group/flannel/DNS 0.12
311 TestNetworkPlugins/group/flannel/Localhost 0.1
312 TestNetworkPlugins/group/flannel/HairPin 0.1
313 TestNetworkPlugins/group/kubenet/Start 54.19
314 TestNetworkPlugins/group/bridge/KubeletFlags 0.14
315 TestNetworkPlugins/group/bridge/NetCatPod 9.21
316 TestNetworkPlugins/group/bridge/DNS 0.15
317 TestNetworkPlugins/group/bridge/Localhost 0.11
318 TestNetworkPlugins/group/bridge/HairPin 0.11
320 TestStartStop/group/old-k8s-version/serial/FirstStart 143.87
321 TestNetworkPlugins/group/kubenet/KubeletFlags 0.15
322 TestNetworkPlugins/group/kubenet/NetCatPod 10.22
323 TestNetworkPlugins/group/kubenet/DNS 0.13
324 TestNetworkPlugins/group/kubenet/Localhost 0.1
325 TestNetworkPlugins/group/kubenet/HairPin 0.11
327 TestStartStop/group/embed-certs/serial/FirstStart 51.02
328 TestStartStop/group/embed-certs/serial/DeployApp 8.28
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.82
330 TestStartStop/group/embed-certs/serial/Stop 8.23
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
332 TestStartStop/group/embed-certs/serial/SecondStart 299.2
333 TestStartStop/group/old-k8s-version/serial/DeployApp 9.32
334 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.69
335 TestStartStop/group/old-k8s-version/serial/Stop 8.31
336 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
337 TestStartStop/group/old-k8s-version/serial/SecondStart 471.74
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.17
341 TestStartStop/group/embed-certs/serial/Pause 1.79
343 TestStartStop/group/no-preload/serial/FirstStart 94.61
344 TestStartStop/group/no-preload/serial/DeployApp 9.28
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
346 TestStartStop/group/no-preload/serial/Stop 8.25
347 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
348 TestStartStop/group/no-preload/serial/SecondStart 300.33
349 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
350 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
352 TestStartStop/group/old-k8s-version/serial/Pause 1.69
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.38
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.27
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.35
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.11
360 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
362 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
363 TestStartStop/group/no-preload/serial/Pause 1.87
365 TestStartStop/group/newest-cni/serial/FirstStart 49.57
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
368 TestStartStop/group/newest-cni/serial/Stop 8.25
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
370 TestStartStop/group/newest-cni/serial/SecondStart 37.08
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
374 TestStartStop/group/newest-cni/serial/Pause 1.84
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
376 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
377 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.17
378 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.82
TestDownloadOnly/v1.16.0/json-events (7.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-858000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-858000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (7.3076953s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.31s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-858000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-858000: exit status 85 (403.814259ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-858000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |          |
	|         | -p download-only-858000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:40:31
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:40:31.435781   10246 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:40:31.436529   10246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:31.436537   10246 out.go:309] Setting ErrFile to fd 2...
	I1002 03:40:31.436544   10246 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:31.437073   10246 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	W1002 03:40:31.437256   10246 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17340-9782/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17340-9782/.minikube/config/config.json: no such file or directory
	I1002 03:40:31.439043   10246 out.go:303] Setting JSON to true
	I1002 03:40:31.461808   10246 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4199,"bootTime":1696239032,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 03:40:31.461913   10246 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:40:31.483512   10246 out.go:97] [download-only-858000] minikube v1.31.2 on Darwin 14.0
	I1002 03:40:31.505096   10246 out.go:169] MINIKUBE_LOCATION=17340
	W1002 03:40:31.483720   10246 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 03:40:31.483758   10246 notify.go:220] Checking for updates...
	I1002 03:40:31.549155   10246 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 03:40:31.570297   10246 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 03:40:31.591290   10246 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:40:31.613274   10246 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	W1002 03:40:31.656104   10246 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 03:40:31.656554   10246 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:40:31.688139   10246 out.go:97] Using the hyperkit driver based on user configuration
	I1002 03:40:31.688182   10246 start.go:298] selected driver: hyperkit
	I1002 03:40:31.688195   10246 start.go:902] validating driver "hyperkit" against <nil>
	I1002 03:40:31.688417   10246 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:40:31.688624   10246 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17340-9782/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1002 03:40:31.825370   10246 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1002 03:40:31.829342   10246 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:40:31.829360   10246 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1002 03:40:31.829386   10246 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 03:40:31.832047   10246 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I1002 03:40:31.832191   10246 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 03:40:31.832219   10246 cni.go:84] Creating CNI manager for ""
	I1002 03:40:31.832232   10246 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 03:40:31.832243   10246 start_flags.go:321] config:
	{Name:download-only-858000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-858000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:40:31.832502   10246 iso.go:125] acquiring lock: {Name:mkb1616e5312c7f7300d9edabdcb664e7c56c074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:40:31.854276   10246 out.go:97] Downloading VM boot image ...
	I1002 03:40:31.854409   10246 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 03:40:34.587988   10246 out.go:97] Starting control plane node download-only-858000 in cluster download-only-858000
	I1002 03:40:34.588019   10246 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:40:34.640045   10246 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1002 03:40:34.640087   10246 cache.go:57] Caching tarball of preloaded images
	I1002 03:40:34.640433   10246 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 03:40:34.663722   10246 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 03:40:34.663738   10246 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1002 03:40:34.743024   10246 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-858000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.40s)

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (5.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-858000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-858000 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=hyperkit : (5.358392242s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (5.36s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
--- PASS: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-858000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-858000: exit status 85 (326.887743ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-858000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |          |
	|         | -p download-only-858000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-858000 | jenkins | v1.31.2 | 02 Oct 23 03:40 PDT |          |
	|         | -p download-only-858000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 03:40:39
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.21.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 03:40:39.148698   10262 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:40:39.148969   10262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:39.148975   10262 out.go:309] Setting ErrFile to fd 2...
	I1002 03:40:39.148979   10262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:40:39.149160   10262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	W1002 03:40:39.149257   10262 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17340-9782/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17340-9782/.minikube/config/config.json: no such file or directory
	I1002 03:40:39.150517   10262 out.go:303] Setting JSON to true
	I1002 03:40:39.172098   10262 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4207,"bootTime":1696239032,"procs":491,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 03:40:39.172189   10262 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:40:39.195646   10262 out.go:97] [download-only-858000] minikube v1.31.2 on Darwin 14.0
	I1002 03:40:39.195921   10262 notify.go:220] Checking for updates...
	I1002 03:40:39.218061   10262 out.go:169] MINIKUBE_LOCATION=17340
	I1002 03:40:39.239935   10262 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 03:40:39.261700   10262 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 03:40:39.283840   10262 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:40:39.327602   10262 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	W1002 03:40:39.370812   10262 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 03:40:39.371603   10262 config.go:182] Loaded profile config "download-only-858000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1002 03:40:39.371680   10262 start.go:810] api.Load failed for download-only-858000: filestore "download-only-858000": Docker machine "download-only-858000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 03:40:39.371818   10262 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 03:40:39.371855   10262 start.go:810] api.Load failed for download-only-858000: filestore "download-only-858000": Docker machine "download-only-858000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 03:40:39.401575   10262 out.go:97] Using the hyperkit driver based on existing profile
	I1002 03:40:39.401619   10262 start.go:298] selected driver: hyperkit
	I1002 03:40:39.401630   10262 start.go:902] validating driver "hyperkit" against &{Name:download-only-858000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-858000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:40:39.402043   10262 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:40:39.402224   10262 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17340-9782/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I1002 03:40:39.411218   10262 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
	I1002 03:40:39.414968   10262 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:40:39.414986   10262 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I1002 03:40:39.417611   10262 cni.go:84] Creating CNI manager for ""
	I1002 03:40:39.417631   10262 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 03:40:39.417646   10262 start_flags.go:321] config:
	{Name:download-only-858000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-858000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:40:39.417768   10262 iso.go:125] acquiring lock: {Name:mkb1616e5312c7f7300d9edabdcb664e7c56c074 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 03:40:39.439580   10262 out.go:97] Starting control plane node download-only-858000 in cluster download-only-858000
	I1002 03:40:39.439598   10262 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:40:39.492280   10262 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 03:40:39.492310   10262 cache.go:57] Caching tarball of preloaded images
	I1002 03:40:39.492638   10262 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:40:39.515413   10262 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 03:40:39.515438   10262 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 03:40:39.596491   10262 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 03:40:42.756460   10262 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 03:40:42.756658   10262 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 03:40:43.379476   10262 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 03:40:43.379557   10262 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/download-only-858000/config.json ...
	I1002 03:40:43.379958   10262 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 03:40:43.380245   10262 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17340-9782/.minikube/cache/darwin/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-858000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.33s)

TestDownloadOnly/DeleteAll (0.38s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-858000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestBinaryMirror (1.02s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-867000 --alsologtostderr --binary-mirror http://127.0.0.1:57124 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-867000
--- PASS: TestBinaryMirror (1.02s)

TestOffline (57.57s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-426000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-426000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (52.281192363s)
helpers_test.go:175: Cleaning up "offline-docker-426000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-426000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-426000: (5.292221188s)
--- PASS: TestOffline (57.57s)

TestAddons/Setup (128.2s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-334000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-darwin-amd64 start -p addons-334000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m8.203079448s)
--- PASS: TestAddons/Setup (128.20s)

TestAddons/parallel/Registry (14.72s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 9.207211ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-h7jlj" [0818f36a-0ca9-4e08-9c9b-8789198219a2] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011464034s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-597vh" [5e5a90dd-4c9e-4ec6-afb0-f2647f64b992] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007571001s
addons_test.go:318: (dbg) Run:  kubectl --context addons-334000 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-334000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-334000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.988539851s)
addons_test.go:337: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 ip
2023/10/02 03:43:09 [DEBUG] GET http://192.168.70.31:5000
addons_test.go:366: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.72s)

TestAddons/parallel/Ingress (20.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-334000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-334000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-334000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [97baae8b-a35f-4a45-b04a-50c702e45069] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [97baae8b-a35f-4a45-b04a-50c702e45069] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.010795426s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context addons-334000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.70.31
addons_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-darwin-amd64 -p addons-334000 addons disable ingress-dns --alsologtostderr -v=1: (1.060324434s)
addons_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-darwin-amd64 -p addons-334000 addons disable ingress --alsologtostderr -v=1: (7.484610762s)
--- PASS: TestAddons/parallel/Ingress (20.09s)

TestAddons/parallel/InspektorGadget (10.49s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9phjd" [6579c030-3e9d-4872-9d25-33472d077623] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010459354s
addons_test.go:819: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-334000
addons_test.go:819: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-334000: (5.481705508s)
--- PASS: TestAddons/parallel/InspektorGadget (10.49s)

TestAddons/parallel/MetricsServer (5.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 2.768785ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-x47kt" [a585378e-8689-4ee4-aed1-2b3b3abfea53] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.01052618s
addons_test.go:393: (dbg) Run:  kubectl --context addons-334000 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.54s)

TestAddons/parallel/HelmTiller (10.49s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 2.525484ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-hfs6q" [feb987bc-a2ed-4fb1-a9c8-e11b4952ca78] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009373239s
addons_test.go:451: (dbg) Run:  kubectl --context addons-334000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-334000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.081634144s)
addons_test.go:468: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.49s)

TestAddons/parallel/CSI (38.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 3.081479ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-334000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-334000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8e010c59-c6f0-45e3-a7e5-6a8e1fbfe7f0] Pending
helpers_test.go:344: "task-pv-pod" [8e010c59-c6f0-45e3-a7e5-6a8e1fbfe7f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8e010c59-c6f0-45e3-a7e5-6a8e1fbfe7f0] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.016470942s
addons_test.go:562: (dbg) Run:  kubectl --context addons-334000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-334000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-334000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-334000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-334000 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-334000 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-334000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-334000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1acd3415-c2d9-4157-a4f4-6e7c72b094b1] Pending
helpers_test.go:344: "task-pv-pod-restore" [1acd3415-c2d9-4157-a4f4-6e7c72b094b1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1acd3415-c2d9-4157-a4f4-6e7c72b094b1] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.013161254s
addons_test.go:604: (dbg) Run:  kubectl --context addons-334000 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-334000 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-334000 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-darwin-amd64 -p addons-334000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.445941844s)
addons_test.go:620: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.25s)

TestAddons/parallel/Headlamp (11.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-334000 --alsologtostderr -v=1
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-skcpw" [4ca75c37-a4ef-4b1f-974b-a2a38281b83b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-skcpw" [4ca75c37-a4ef-4b1f-974b-a2a38281b83b] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009206292s
--- PASS: TestAddons/parallel/Headlamp (11.96s)

TestAddons/parallel/CloudSpanner (5.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-8sh7n" [40b7c4b5-fce7-4b9d-9732-b11e8a0b29ac] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007552543s
addons_test.go:838: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-334000
--- PASS: TestAddons/parallel/CloudSpanner (5.35s)

TestAddons/parallel/LocalPath (51.93s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-334000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-334000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [276546a8-ee69-43a4-b1d3-c1c28da639f7] Pending
helpers_test.go:344: "test-local-path" [276546a8-ee69-43a4-b1d3-c1c28da639f7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [276546a8-ee69-43a4-b1d3-c1c28da639f7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [276546a8-ee69-43a4-b1d3-c1c28da639f7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005701986s
addons_test.go:869: (dbg) Run:  kubectl --context addons-334000 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 ssh "cat /opt/local-path-provisioner/pvc-3e972f2b-b162-4772-b83d-43828f83a6ec_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-334000 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-334000 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-darwin-amd64 -p addons-334000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-darwin-amd64 -p addons-334000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.850906123s)
--- PASS: TestAddons/parallel/LocalPath (51.93s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-334000 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-334000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.72s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-334000
addons_test.go:150: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-334000: (5.209872649s)
addons_test.go:154: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-334000
addons_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-334000
addons_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-334000
--- PASS: TestAddons/StoppedEnableDisable (5.72s)

TestCertOptions (38.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-941000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-941000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (34.44205047s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-941000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-941000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-941000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-941000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-941000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-941000: (3.401053581s)
--- PASS: TestCertOptions (38.16s)

TestCertExpiration (246.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-115000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-115000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (34.267157139s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-115000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E1002 04:15:42.192371   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-115000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (26.675914284s)
helpers_test.go:175: Cleaning up "cert-expiration-115000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-115000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-115000: (5.263898271s)
--- PASS: TestCertExpiration (246.21s)

TestDockerFlags (46.17s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E1002 04:11:33.998856   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (40.589414684s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-390000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-390000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-390000: (5.260138119s)
--- PASS: TestDockerFlags (46.17s)

TestForceSystemdFlag (39.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-169000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-169000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (35.598176135s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-169000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-169000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-169000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-169000: (3.597882131s)
--- PASS: TestForceSystemdFlag (39.37s)

TestForceSystemdEnv (43.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-124000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-124000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (37.803822587s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-124000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-124000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-124000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-124000: (5.452280502s)
--- PASS: TestForceSystemdEnv (43.43s)

TestHyperKitDriverInstallOrUpdate (5.97s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (5.97s)

TestErrorSpam/setup (34.73s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-781000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-781000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 --driver=hyperkit : (34.72976579s)
--- PASS: TestErrorSpam/setup (34.73s)

TestErrorSpam/start (1.46s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 start --dry-run
--- PASS: TestErrorSpam/start (1.46s)

TestErrorSpam/status (0.47s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 status
--- PASS: TestErrorSpam/status (0.47s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.28s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 unpause
--- PASS: TestErrorSpam/unpause (1.28s)

TestErrorSpam/stop (3.62s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 stop: (3.219613996s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-781000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-781000 stop
--- PASS: TestErrorSpam/stop (3.62s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17340-9782/.minikube/files/etc/test/nested/copy/10244/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-686000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-686000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (49.46533722s)
--- PASS: TestFunctional/serial/StartWithProxy (49.47s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.64s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-686000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-686000 --alsologtostderr -v=8: (38.640379955s)
functional_test.go:659: soft start took 38.64091858s for "functional-686000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.64s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-686000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 cache add registry.k8s.io/pause:3.1: (1.583173042s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 cache add registry.k8s.io/pause:3.3: (1.449919902s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 cache add registry.k8s.io/pause:latest: (1.364431412s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.40s)

TestFunctional/serial/CacheCmd/cache/add_local (1.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local107756795/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cache add minikube-local-cache-test:functional-686000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cache delete minikube-local-cache-test:functional-686000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-686000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.59s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (144.05241ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 kubectl -- --context functional-686000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.54s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-686000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (39.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-686000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-686000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.901363627s)
functional_test.go:757: restart took 39.901536131s for "functional-686000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.90s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-686000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 logs: (3.001640756s)
--- PASS: TestFunctional/serial/LogsCmd (3.00s)

TestFunctional/serial/LogsFileCmd (2.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4156089508/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd4156089508/001/logs.txt: (2.85253179s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.85s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-686000 apply -f testdata/invalidsvc.yaml
E1002 03:47:55.141339   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.148589   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.159755   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.179961   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.222198   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.302621   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.463859   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:55.784807   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:47:56.425659   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-686000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-686000: exit status 115 (263.162816ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.70.33:31079 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-686000 delete -f testdata/invalidsvc.yaml
E1002 03:47:57.706809   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
functional_test.go:2323: (dbg) Done: kubectl --context functional-686000 delete -f testdata/invalidsvc.yaml: (1.418996615s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 config get cpus: exit status 14 (41.874255ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 config get cpus: exit status 14 (45.398643ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)

TestFunctional/parallel/DashboardCmd (12.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-686000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-686000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 11682: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.52s)

TestFunctional/parallel/DryRun (1.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-686000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-686000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (740.386539ms)

-- stdout --
	* [functional-686000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 03:49:00.597776   11596 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:49:00.598069   11596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:00.598074   11596 out.go:309] Setting ErrFile to fd 2...
	I1002 03:49:00.598078   11596 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:00.598272   11596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 03:49:00.599664   11596 out.go:303] Setting JSON to false
	I1002 03:49:00.623283   11596 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4708,"bootTime":1696239032,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 03:49:00.623389   11596 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:49:00.645214   11596 out.go:177] * [functional-686000] minikube v1.31.2 on Darwin 14.0
	I1002 03:49:00.757305   11596 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:49:00.719464   11596 notify.go:220] Checking for updates...
	I1002 03:49:00.837252   11596 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 03:49:00.900283   11596 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 03:49:00.944156   11596 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:49:00.965277   11596 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	I1002 03:49:01.023075   11596 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:49:01.060607   11596 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:49:01.060955   11596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:49:01.060999   11596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:49:01.069297   11596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58210
	I1002 03:49:01.069666   11596 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:49:01.070079   11596 main.go:141] libmachine: Using API Version  1
	I1002 03:49:01.070089   11596 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:49:01.070333   11596 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:49:01.070441   11596 main.go:141] libmachine: (functional-686000) Calling .DriverName
	I1002 03:49:01.070639   11596 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:49:01.070884   11596 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:49:01.070911   11596 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:49:01.079004   11596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58213
	I1002 03:49:01.079327   11596 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:49:01.079672   11596 main.go:141] libmachine: Using API Version  1
	I1002 03:49:01.079687   11596 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:49:01.080224   11596 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:49:01.080341   11596 main.go:141] libmachine: (functional-686000) Calling .DriverName
	I1002 03:49:01.109257   11596 out.go:177] * Using the hyperkit driver based on existing profile
	I1002 03:49:01.151282   11596 start.go:298] selected driver: hyperkit
	I1002 03:49:01.151298   11596 start.go:902] validating driver "hyperkit" against &{Name:functional-686000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-686000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.70.33 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:01.151438   11596 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:49:01.196150   11596 out.go:177] 
	W1002 03:49:01.233519   11596 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 03:49:01.255333   11596 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-686000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.27s)

TestFunctional/parallel/InternationalLanguage (0.85s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-686000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-686000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (848.723611ms)

-- stdout --
	* [functional-686000] minikube v1.31.2 sur Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 03:49:01.857776   11628 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:49:01.858054   11628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:01.858059   11628 out.go:309] Setting ErrFile to fd 2...
	I1002 03:49:01.858063   11628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:49:01.858227   11628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 03:49:01.859922   11628 out.go:303] Setting JSON to false
	I1002 03:49:01.882184   11628 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4709,"bootTime":1696239032,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W1002 03:49:01.882291   11628 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1002 03:49:01.903536   11628 out.go:177] * [functional-686000] minikube v1.31.2 sur Darwin 14.0
	I1002 03:49:01.945244   11628 out.go:177]   - MINIKUBE_LOCATION=17340
	I1002 03:49:01.945386   11628 notify.go:220] Checking for updates...
	I1002 03:49:01.989557   11628 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	I1002 03:49:02.032272   11628 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1002 03:49:02.075273   11628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 03:49:02.118460   11628 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	I1002 03:49:02.197358   11628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 03:49:02.241936   11628 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:49:02.242493   11628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:49:02.242566   11628 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:49:02.251115   11628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58262
	I1002 03:49:02.251483   11628 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:49:02.251903   11628 main.go:141] libmachine: Using API Version  1
	I1002 03:49:02.251934   11628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:49:02.252152   11628 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:49:02.252262   11628 main.go:141] libmachine: (functional-686000) Calling .DriverName
	I1002 03:49:02.252450   11628 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 03:49:02.252698   11628 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:49:02.252722   11628 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:49:02.260631   11628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58264
	I1002 03:49:02.261016   11628 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:49:02.261383   11628 main.go:141] libmachine: Using API Version  1
	I1002 03:49:02.261396   11628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:49:02.261579   11628 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:49:02.261686   11628 main.go:141] libmachine: (functional-686000) Calling .DriverName
	I1002 03:49:02.326387   11628 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I1002 03:49:02.384369   11628 start.go:298] selected driver: hyperkit
	I1002 03:49:02.384382   11628 start.go:902] validating driver "hyperkit" against &{Name:functional-686000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-686000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.70.33 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 03:49:02.384520   11628 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 03:49:02.502161   11628 out.go:177] 
	W1002 03:49:02.566269   11628 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 03:49:02.629070   11628 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.85s)

TestFunctional/parallel/StatusCmd (0.55s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.55s)

TestFunctional/parallel/ServiceCmdConnect (9.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-686000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-686000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-sqq92" [e528bf45-d4d6-4ab9-9f47-06c7b3361815] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-sqq92" [e528bf45-d4d6-4ab9-9f47-06c7b3361815] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.011232686s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.70.33:30333
functional_test.go:1674: http://192.168.70.33:30333: success! body:

Hostname: hello-node-connect-55497b8b78-sqq92

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.70.33:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.70.33:30333
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.58s)

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (27.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [08b67436-8ef0-4c8e-b70e-bba0299d5340] Running
E1002 03:48:36.111930   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00861959s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-686000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-686000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-686000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-686000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6452daea-6290-4a4f-a833-416009215f08] Pending
helpers_test.go:344: "sp-pod" [6452daea-6290-4a4f-a833-416009215f08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6452daea-6290-4a4f-a833-416009215f08] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.013019794s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-686000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-686000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-686000 delete -f testdata/storage-provisioner/pod.yaml: (1.225643984s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-686000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a6e387b2-b327-498d-a715-1edd5a3729a3] Pending
helpers_test.go:344: "sp-pod" [a6e387b2-b327-498d-a715-1edd5a3729a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a6e387b2-b327-498d-a715-1edd5a3729a3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.016468727s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-686000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.91s)

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (0.66s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh -n functional-686000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 cp functional-686000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd701626273/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh -n functional-686000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.66s)

TestFunctional/parallel/MySQL (28.49s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-686000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-txb8j" [1aa0bd48-627a-4556-81b5-804bbe0bb4a3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1002 03:48:05.388122   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
helpers_test.go:344: "mysql-859648c796-txb8j" [1aa0bd48-627a-4556-81b5-804bbe0bb4a3] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.022050404s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686000 exec mysql-859648c796-txb8j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686000 exec mysql-859648c796-txb8j -- mysql -ppassword -e "show databases;": exit status 1 (132.205258ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686000 exec mysql-859648c796-txb8j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-686000 exec mysql-859648c796-txb8j -- mysql -ppassword -e "show databases;": exit status 1 (106.237732ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-686000 exec mysql-859648c796-txb8j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.49s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10244/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /etc/test/nested/copy/10244/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10244.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /etc/ssl/certs/10244.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10244.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /usr/share/ca-certificates/10244.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/102442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /etc/ssl/certs/102442.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/102442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /usr/share/ca-certificates/102442.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.15s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-686000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh "sudo systemctl is-active crio": exit status 1 (213.151721ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/License (0.5s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.50s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-686000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-686000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-686000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-686000 image ls --format short --alsologtostderr:
I1002 03:49:04.844002   11695 out.go:296] Setting OutFile to fd 1 ...
I1002 03:49:04.844223   11695 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:04.844230   11695 out.go:309] Setting ErrFile to fd 2...
I1002 03:49:04.844234   11695 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:04.844446   11695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
I1002 03:49:04.845089   11695 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:04.845186   11695 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:04.846128   11695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:04.846224   11695 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:04.856109   11695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58341
I1002 03:49:04.856638   11695 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:04.857164   11695 main.go:141] libmachine: Using API Version  1
I1002 03:49:04.857176   11695 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:04.857542   11695 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:04.857734   11695 main.go:141] libmachine: (functional-686000) Calling .GetState
I1002 03:49:04.857832   11695 main.go:141] libmachine: (functional-686000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1002 03:49:04.857911   11695 main.go:141] libmachine: (functional-686000) DBG | hyperkit pid from json: 10802
I1002 03:49:04.859638   11695 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:04.859668   11695 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:04.868849   11695 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58343
I1002 03:49:04.869340   11695 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:04.869691   11695 main.go:141] libmachine: Using API Version  1
I1002 03:49:04.869702   11695 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:04.869957   11695 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:04.870076   11695 main.go:141] libmachine: (functional-686000) Calling .DriverName
I1002 03:49:04.870243   11695 ssh_runner.go:195] Run: systemctl --version
I1002 03:49:04.870264   11695 main.go:141] libmachine: (functional-686000) Calling .GetSSHHostname
I1002 03:49:04.870357   11695 main.go:141] libmachine: (functional-686000) Calling .GetSSHPort
I1002 03:49:04.870428   11695 main.go:141] libmachine: (functional-686000) Calling .GetSSHKeyPath
I1002 03:49:04.870499   11695 main.go:141] libmachine: (functional-686000) Calling .GetSSHUsername
I1002 03:49:04.870590   11695 sshutil.go:53] new ssh client: &{IP:192.168.70.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/functional-686000/id_rsa Username:docker}
I1002 03:49:04.915412   11695 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 03:49:04.932115   11695 main.go:141] libmachine: Making call to close driver server
I1002 03:49:04.932146   11695 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:04.932349   11695 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:04.932359   11695 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:04.932365   11695 main.go:141] libmachine: Making call to close driver server
I1002 03:49:04.932370   11695 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:04.932399   11695 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:04.932601   11695 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:04.932614   11695 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:04.932614   11695 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-686000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-686000 | d815181499c6b | 1.24MB |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-686000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-686000 | 39435c038942d | 30B    |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| docker.io/library/nginx                     | alpine            | d571254277f6a | 42.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-686000 image ls --format table --alsologtostderr:
I1002 03:49:07.542839   11722 out.go:296] Setting OutFile to fd 1 ...
I1002 03:49:07.543126   11722 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:07.543131   11722 out.go:309] Setting ErrFile to fd 2...
I1002 03:49:07.543135   11722 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:07.543311   11722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
I1002 03:49:07.543942   11722 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:07.544031   11722 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:07.544369   11722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:07.544426   11722 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:07.552273   11722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58377
I1002 03:49:07.552799   11722 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:07.553211   11722 main.go:141] libmachine: Using API Version  1
I1002 03:49:07.553223   11722 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:07.553556   11722 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:07.553675   11722 main.go:141] libmachine: (functional-686000) Calling .GetState
I1002 03:49:07.553814   11722 main.go:141] libmachine: (functional-686000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1002 03:49:07.553938   11722 main.go:141] libmachine: (functional-686000) DBG | hyperkit pid from json: 10802
I1002 03:49:07.555355   11722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:07.555378   11722 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:07.563086   11722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58379
I1002 03:49:07.563420   11722 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:07.563793   11722 main.go:141] libmachine: Using API Version  1
I1002 03:49:07.563808   11722 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:07.564018   11722 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:07.564151   11722 main.go:141] libmachine: (functional-686000) Calling .DriverName
I1002 03:49:07.564326   11722 ssh_runner.go:195] Run: systemctl --version
I1002 03:49:07.564347   11722 main.go:141] libmachine: (functional-686000) Calling .GetSSHHostname
I1002 03:49:07.564444   11722 main.go:141] libmachine: (functional-686000) Calling .GetSSHPort
I1002 03:49:07.564574   11722 main.go:141] libmachine: (functional-686000) Calling .GetSSHKeyPath
I1002 03:49:07.564686   11722 main.go:141] libmachine: (functional-686000) Calling .GetSSHUsername
I1002 03:49:07.564783   11722 sshutil.go:53] new ssh client: &{IP:192.168.70.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/functional-686000/id_rsa Username:docker}
I1002 03:49:07.608861   11722 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 03:49:07.632790   11722 main.go:141] libmachine: Making call to close driver server
I1002 03:49:07.632801   11722 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:07.632933   11722 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:07.632979   11722 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:07.632995   11722 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:07.633004   11722 main.go:141] libmachine: Making call to close driver server
I1002 03:49:07.633010   11722 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:07.633139   11722 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:07.633139   11722 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:07.633148   11722 main.go:141] libmachine: Making call to close connection to plugin binary
2023/10/02 03:49:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-686000 image ls --format json --alsologtostderr:
[{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"d815181499c6b605b88745f93164eaa78ff05d948000b2c40b5c9e2b9f84ded6","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-686000"],"size":"1240000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"39435c038942d0a7c8e35c3db0155512fec69266d6f441483c181fbffacab4f0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-686000"],"size":"30"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-686000"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-686000 image ls --format json --alsologtostderr:
I1002 03:49:07.395563   11718 out.go:296] Setting OutFile to fd 1 ...
I1002 03:49:07.395774   11718 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:07.395779   11718 out.go:309] Setting ErrFile to fd 2...
I1002 03:49:07.395783   11718 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:07.395985   11718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
I1002 03:49:07.396614   11718 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:07.396706   11718 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:07.397082   11718 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:07.397129   11718 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:07.404640   11718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58372
I1002 03:49:07.405069   11718 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:07.405477   11718 main.go:141] libmachine: Using API Version  1
I1002 03:49:07.405490   11718 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:07.405689   11718 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:07.405780   11718 main.go:141] libmachine: (functional-686000) Calling .GetState
I1002 03:49:07.405860   11718 main.go:141] libmachine: (functional-686000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1002 03:49:07.405920   11718 main.go:141] libmachine: (functional-686000) DBG | hyperkit pid from json: 10802
I1002 03:49:07.407289   11718 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:07.407310   11718 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:07.414968   11718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58374
I1002 03:49:07.415276   11718 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:07.415641   11718 main.go:141] libmachine: Using API Version  1
I1002 03:49:07.415654   11718 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:07.415912   11718 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:07.416030   11718 main.go:141] libmachine: (functional-686000) Calling .DriverName
I1002 03:49:07.416187   11718 ssh_runner.go:195] Run: systemctl --version
I1002 03:49:07.416208   11718 main.go:141] libmachine: (functional-686000) Calling .GetSSHHostname
I1002 03:49:07.416296   11718 main.go:141] libmachine: (functional-686000) Calling .GetSSHPort
I1002 03:49:07.416379   11718 main.go:141] libmachine: (functional-686000) Calling .GetSSHKeyPath
I1002 03:49:07.416457   11718 main.go:141] libmachine: (functional-686000) Calling .GetSSHUsername
I1002 03:49:07.416524   11718 sshutil.go:53] new ssh client: &{IP:192.168.70.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/functional-686000/id_rsa Username:docker}
I1002 03:49:07.460931   11718 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 03:49:07.476785   11718 main.go:141] libmachine: Making call to close driver server
I1002 03:49:07.476794   11718 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:07.476958   11718 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:07.476971   11718 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:07.476979   11718 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:07.476982   11718 main.go:141] libmachine: Making call to close driver server
I1002 03:49:07.476991   11718 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:07.477114   11718 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:07.477117   11718 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:07.477125   11718 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-686000 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-686000
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 39435c038942d0a7c8e35c3db0155512fec69266d6f441483c181fbffacab4f0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-686000
size: "30"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-686000 image ls --format yaml --alsologtostderr:
I1002 03:49:05.002652   11699 out.go:296] Setting OutFile to fd 1 ...
I1002 03:49:05.002872   11699 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:05.002877   11699 out.go:309] Setting ErrFile to fd 2...
I1002 03:49:05.002881   11699 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:05.003069   11699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
I1002 03:49:05.003750   11699 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:05.003841   11699 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:05.004175   11699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:05.004227   11699 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:05.011862   11699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58346
I1002 03:49:05.012280   11699 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:05.012724   11699 main.go:141] libmachine: Using API Version  1
I1002 03:49:05.012754   11699 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:05.012958   11699 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:05.013067   11699 main.go:141] libmachine: (functional-686000) Calling .GetState
I1002 03:49:05.013151   11699 main.go:141] libmachine: (functional-686000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1002 03:49:05.013213   11699 main.go:141] libmachine: (functional-686000) DBG | hyperkit pid from json: 10802
I1002 03:49:05.014637   11699 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:05.014660   11699 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:05.022287   11699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58348
I1002 03:49:05.022623   11699 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:05.022982   11699 main.go:141] libmachine: Using API Version  1
I1002 03:49:05.023002   11699 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:05.023188   11699 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:05.023284   11699 main.go:141] libmachine: (functional-686000) Calling .DriverName
I1002 03:49:05.023446   11699 ssh_runner.go:195] Run: systemctl --version
I1002 03:49:05.023467   11699 main.go:141] libmachine: (functional-686000) Calling .GetSSHHostname
I1002 03:49:05.023552   11699 main.go:141] libmachine: (functional-686000) Calling .GetSSHPort
I1002 03:49:05.023618   11699 main.go:141] libmachine: (functional-686000) Calling .GetSSHKeyPath
I1002 03:49:05.023693   11699 main.go:141] libmachine: (functional-686000) Calling .GetSSHUsername
I1002 03:49:05.023784   11699 sshutil.go:53] new ssh client: &{IP:192.168.70.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/functional-686000/id_rsa Username:docker}
I1002 03:49:05.068530   11699 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 03:49:05.087333   11699 main.go:141] libmachine: Making call to close driver server
I1002 03:49:05.087343   11699 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:05.087677   11699 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:05.087707   11699 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:05.087714   11699 main.go:141] libmachine: Making call to close driver server
I1002 03:49:05.087722   11699 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:05.087728   11699 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:05.087850   11699 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:05.087859   11699 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:05.087897   11699 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)
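The YAML listing above is assembled from the `docker images --no-trunc --format "{{json .}}"` call visible in the stderr log: the daemon emits one JSON object per image, and each `repoTags` entry is the `Repository` and `Tag` fields joined with a colon. A minimal sketch of that mapping (the sample JSON line below is hypothetical, not taken from this run):

```shell
# Hypothetical sample of one output line of `docker images --no-trunc --format "{{json .}}"`,
# the command this test runs over SSH; field names follow Docker's format-template context.
line='{"ID":"sha256:e6f18168","Repository":"registry.k8s.io/pause","Tag":"3.9","Size":"744kB"}'
# Extract the two fields that make up a repoTags entry:
repo=$(printf '%s' "$line" | sed -n 's/.*"Repository":"\([^"]*\)".*/\1/p')
tag=$(printf '%s' "$line" | sed -n 's/.*"Tag":"\([^"]*\)".*/\1/p')
echo "$repo:$tag"   # a repoTags entry is Repository and Tag joined with a colon
```

This is only an illustration of the data shape; the real test parses the full JSON per image rather than scraping fields with sed.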

TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh pgrep buildkitd: exit status 1 (121.58922ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image build -t localhost/my-image:functional-686000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image build -t localhost/my-image:functional-686000 testdata/build --alsologtostderr: (1.966834499s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-686000 image build -t localhost/my-image:functional-686000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 782ce8be390c
Removing intermediate container 782ce8be390c
---> 699798e1adca
Step 3/3 : ADD content.txt /
---> d815181499c6
Successfully built d815181499c6
Successfully tagged localhost/my-image:functional-686000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-686000 image build -t localhost/my-image:functional-686000 testdata/build --alsologtostderr:
I1002 03:49:05.277270   11708 out.go:296] Setting OutFile to fd 1 ...
I1002 03:49:05.278103   11708 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:05.278109   11708 out.go:309] Setting ErrFile to fd 2...
I1002 03:49:05.278113   11708 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 03:49:05.278292   11708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
I1002 03:49:05.278886   11708 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:05.279531   11708 config.go:182] Loaded profile config "functional-686000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 03:49:05.279928   11708 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:05.279966   11708 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:05.287862   11708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58359
I1002 03:49:05.288296   11708 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:05.288756   11708 main.go:141] libmachine: Using API Version  1
I1002 03:49:05.288773   11708 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:05.289112   11708 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:05.289224   11708 main.go:141] libmachine: (functional-686000) Calling .GetState
I1002 03:49:05.289391   11708 main.go:141] libmachine: (functional-686000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1002 03:49:05.289508   11708 main.go:141] libmachine: (functional-686000) DBG | hyperkit pid from json: 10802
I1002 03:49:05.290900   11708 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1002 03:49:05.290922   11708 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1002 03:49:05.298935   11708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:58361
I1002 03:49:05.299289   11708 main.go:141] libmachine: () Calling .GetVersion
I1002 03:49:05.299622   11708 main.go:141] libmachine: Using API Version  1
I1002 03:49:05.299639   11708 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 03:49:05.299842   11708 main.go:141] libmachine: () Calling .GetMachineName
I1002 03:49:05.299942   11708 main.go:141] libmachine: (functional-686000) Calling .DriverName
I1002 03:49:05.300110   11708 ssh_runner.go:195] Run: systemctl --version
I1002 03:49:05.300132   11708 main.go:141] libmachine: (functional-686000) Calling .GetSSHHostname
I1002 03:49:05.300222   11708 main.go:141] libmachine: (functional-686000) Calling .GetSSHPort
I1002 03:49:05.300304   11708 main.go:141] libmachine: (functional-686000) Calling .GetSSHKeyPath
I1002 03:49:05.300390   11708 main.go:141] libmachine: (functional-686000) Calling .GetSSHUsername
I1002 03:49:05.300466   11708 sshutil.go:53] new ssh client: &{IP:192.168.70.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/functional-686000/id_rsa Username:docker}
I1002 03:49:05.343926   11708 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2729668316.tar
I1002 03:49:05.344018   11708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 03:49:05.350370   11708 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2729668316.tar
I1002 03:49:05.353170   11708 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2729668316.tar: stat -c "%s %y" /var/lib/minikube/build/build.2729668316.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2729668316.tar': No such file or directory
I1002 03:49:05.353194   11708 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2729668316.tar --> /var/lib/minikube/build/build.2729668316.tar (3072 bytes)
I1002 03:49:05.369683   11708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2729668316
I1002 03:49:05.375676   11708 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2729668316 -xf /var/lib/minikube/build/build.2729668316.tar
I1002 03:49:05.381361   11708 docker.go:340] Building image: /var/lib/minikube/build/build.2729668316
I1002 03:49:05.381431   11708 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-686000 /var/lib/minikube/build/build.2729668316
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1002 03:49:07.137141   11708 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-686000 /var/lib/minikube/build/build.2729668316: (1.75565377s)
I1002 03:49:07.137202   11708 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2729668316
I1002 03:49:07.145022   11708 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2729668316.tar
I1002 03:49:07.155023   11708 build_images.go:207] Built localhost/my-image:functional-686000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2729668316.tar
I1002 03:49:07.155050   11708 build_images.go:123] succeeded building to: functional-686000
I1002 03:49:07.155066   11708 build_images.go:124] failed building to: 
I1002 03:49:07.155090   11708 main.go:141] libmachine: Making call to close driver server
I1002 03:49:07.155098   11708 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:07.155250   11708 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
I1002 03:49:07.155257   11708 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:07.155265   11708 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:07.155274   11708 main.go:141] libmachine: Making call to close driver server
I1002 03:49:07.155280   11708 main.go:141] libmachine: (functional-686000) Calling .Close
I1002 03:49:07.155385   11708 main.go:141] libmachine: Successfully made call to close driver server
I1002 03:49:07.155396   11708 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 03:49:07.155447   11708 main.go:141] libmachine: (functional-686000) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)
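The stderr log above shows how the build context reaches the VM before `docker build` runs: the local `testdata/build` directory is packed into a tarball, copied up, and unpacked into a per-build directory under `/var/lib/minikube/build`. A minimal local sketch of that staging flow, with temp directories standing in for the VM paths:

```shell
# Sketch of the staging flow from the log: the build context is shipped as a
# tarball, unpacked into a per-build directory, then handed to `docker build`.
# All paths here are local stand-ins for the VM's /var/lib/minikube/build.
set -eu
build_src=$(mktemp -d)                    # stands in for testdata/build
printf 'hello' > "$build_src/content.txt" # the file ADDed in Step 3/3 above
tar_file=$(mktemp -d)/build.tar
tar -C "$build_src" -cf "$tar_file" .     # pack the context (scp'd over SSH in the real test)
stage_dir=$(mktemp -d)/build.ctx          # per-build staging directory
mkdir -p "$stage_dir"
tar -C "$stage_dir" -xf "$tar_file"       # unpacked before docker build runs on it
test -f "$stage_dir/content.txt" && echo "context staged"
```

After the build, the test removes both the staging directory and the tarball, matching the `sudo rm -rf` / `sudo rm -f` lines in the log.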

TestFunctional/parallel/ImageCommands/Setup (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.795662284s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-686000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.85s)

TestFunctional/parallel/DockerEnv/bash (0.77s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-686000 docker-env) && out/minikube-darwin-amd64 status -p functional-686000"
E1002 03:48:00.267245   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-686000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image load --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image load --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr: (3.303938635s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image load --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image load --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr: (2.103749902s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.171053784s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-686000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image load --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image load --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr: (3.166726971s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.56s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image save gcr.io/google-containers/addon-resizer:functional-686000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image save gcr.io/google-containers/addon-resizer:functional-686000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.203618114s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image rm gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.35s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
E1002 03:48:15.629078   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.129912056s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.30s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-686000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 image save --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-686000 image save --daemon gcr.io/google-containers/addon-resizer:functional-686000 --alsologtostderr: (1.167267812s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-686000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.29s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-686000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-686000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-7pxwt" [5c7de171-4b74-43da-869f-5e13ac7456d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-7pxwt" [5c7de171-4b74-43da-869f-5e13ac7456d3] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.01397393s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.13s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-686000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-686000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-686000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-686000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11392: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-686000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ServiceCmd/List (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-686000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a0341c71-f7fd-4c7b-8ec8-da0096a6e67c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a0341c71-f7fd-4c7b-8ec8-da0096a6e67c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.00912229s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 service list -o json
functional_test.go:1493: Took "373.537621ms" to run "out/minikube-darwin-amd64 -p functional-686000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.70.33:32034
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.70.33:32034
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-686000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.100.77 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-686000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "198.045671ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "120.193981ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "256.769175ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "65.373248ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (6.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port279642651/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696243734131154000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port279642651/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696243734131154000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port279642651/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696243734131154000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port279642651/001/test-1696243734131154000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (163.864054ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 10:48 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 10:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 10:48 test-1696243734131154000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh cat /mount-9p/test-1696243734131154000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-686000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [22d9e36c-747e-4f18-8632-ec090dbffc3e] Pending
helpers_test.go:344: "busybox-mount" [22d9e36c-747e-4f18-8632-ec090dbffc3e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [22d9e36c-747e-4f18-8632-ec090dbffc3e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [22d9e36c-747e-4f18-8632-ec090dbffc3e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.019433681s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-686000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port279642651/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.05s)

TestFunctional/parallel/MountCmd/specific-port (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2292169039/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.070059ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2292169039/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh "sudo umount -f /mount-9p": exit status 1 (150.890622ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-686000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2292169039/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3583813564/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3583813564/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3583813564/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T" /mount1: exit status 1 (201.120307ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-686000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-686000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3583813564/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3583813564/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-686000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3583813564/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.62s)

TestFunctional/delete_addon-resizer_images (0.14s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-686000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-686000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-686000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestIngressAddonLegacy/StartLegacyK8sCluster (99.47s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-239000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
E1002 03:50:38.996804   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-239000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m39.471109838s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.47s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.8s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons enable ingress --alsologtostderr -v=5: (11.804439876s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.80s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.63s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (46.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-239000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-239000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.503400705s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-239000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-239000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e44b6960-2cc2-431a-bc5d-ae74732a6551] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e44b6960-2cc2-431a-bc5d-ae74732a6551] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.011994861s
addons_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-239000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.70.35
addons_test.go:284: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons disable ingress-dns --alsologtostderr -v=1: (10.147904808s)
addons_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-239000 addons disable ingress --alsologtostderr -v=1: (7.286399915s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (46.82s)

TestJSONOutput/start/Command (49.09s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-478000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E1002 03:52:55.149161   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:53:02.334404   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.340883   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.351322   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.372313   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.412966   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.493782   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.654017   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:02.975309   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:03.616925   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:04.897156   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:07.457915   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:12.579126   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-478000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (49.091449238s)
--- PASS: TestJSONOutput/start/Command (49.09s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.42s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-478000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.42s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-478000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-478000 --output=json --user=testUser
E1002 03:53:22.819870   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:53:22.842437   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-478000 --output=json --user=testUser: (8.164014798s)
--- PASS: TestJSONOutput/stop/Command (8.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-559000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-559000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (391.728281ms)

-- stdout --
	{"specversion":"1.0","id":"78b9623f-13cf-47d8-b471-181af69831d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-559000] minikube v1.31.2 on Darwin 14.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6741a4f9-9e5d-461e-b35d-1ac9e1c30e13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17340"}}
	{"specversion":"1.0","id":"14cc7c1b-7b2e-4a04-91a8-752bf73b6a52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig"}}
	{"specversion":"1.0","id":"01be33cb-7b95-454d-9f65-153893d7b738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"55c34cb3-2401-4be3-8f31-866ad6f8bc80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0ea5280-b6e3-43e9-9eb5-80dc185ef776","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube"}}
	{"specversion":"1.0","id":"02e2d93e-109f-447a-bfb3-8064efff8827","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2d85949d-fe5b-48ff-9bba-c1784ae88b1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-559000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-559000
--- PASS: TestErrorJSONOutput (0.76s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (16.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-416000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-416000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.410234047s)
--- PASS: TestMountStart/serial/StartWithMountFirst (16.41s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-416000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-416000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (16.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-432000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-432000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (15.275792594s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.28s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-432000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-432000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (2.27s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-416000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-416000 --alsologtostderr -v=5: (2.273300341s)
--- PASS: TestMountStart/serial/DeleteFirst (2.27s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-432000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-432000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (2.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-432000
E1002 03:54:24.264930   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-432000: (2.222170537s)
--- PASS: TestMountStart/serial/Stop (2.22s)

TestMountStart/serial/RestartStopped (16.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-432000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-432000: (15.573880446s)
--- PASS: TestMountStart/serial/RestartStopped (16.58s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-432000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-432000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (90.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-369000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E1002 03:55:46.188838   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-369000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m30.024173677s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.26s)

TestMultiNode/serial/DeployApp2Nodes (4.7s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-369000 -- rollout status deployment/busybox: (3.032499221s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-mhm8j -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-rzsf2 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-mhm8j -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-rzsf2 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-mhm8j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-rzsf2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.70s)

TestMultiNode/serial/PingHostFrom2Pods (0.8s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-mhm8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-mhm8j -- sh -c "ping -c 1 192.168.70.1"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-rzsf2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-369000 -- exec busybox-5bc68d56bd-rzsf2 -- sh -c "ping -c 1 192.168.70.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

TestMultiNode/serial/AddNode (32.81s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-369000 -v 3 --alsologtostderr
E1002 03:56:34.030710   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.035903   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.046917   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.067342   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.109099   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.189871   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.350155   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:34.671439   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:35.312045   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:36.593636   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:39.155239   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:56:44.275543   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-369000 -v 3 --alsologtostderr: (32.487935268s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.81s)

TestMultiNode/serial/ProfileList (0.2s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

TestMultiNode/serial/CopyFile (5.12s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp testdata/cp-test.txt multinode-369000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2739035750/001/cp-test_multinode-369000.txt
E1002 03:56:54.515892   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000:/home/docker/cp-test.txt multinode-369000-m02:/home/docker/cp-test_multinode-369000_multinode-369000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m02 "sudo cat /home/docker/cp-test_multinode-369000_multinode-369000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000:/home/docker/cp-test.txt multinode-369000-m03:/home/docker/cp-test_multinode-369000_multinode-369000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m03 "sudo cat /home/docker/cp-test_multinode-369000_multinode-369000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp testdata/cp-test.txt multinode-369000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2739035750/001/cp-test_multinode-369000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000-m02:/home/docker/cp-test.txt multinode-369000:/home/docker/cp-test_multinode-369000-m02_multinode-369000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000 "sudo cat /home/docker/cp-test_multinode-369000-m02_multinode-369000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000-m02:/home/docker/cp-test.txt multinode-369000-m03:/home/docker/cp-test_multinode-369000-m02_multinode-369000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m03 "sudo cat /home/docker/cp-test_multinode-369000-m02_multinode-369000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp testdata/cp-test.txt multinode-369000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2739035750/001/cp-test_multinode-369000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000-m03:/home/docker/cp-test.txt multinode-369000:/home/docker/cp-test_multinode-369000-m03_multinode-369000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000 "sudo cat /home/docker/cp-test_multinode-369000-m03_multinode-369000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 cp multinode-369000-m03:/home/docker/cp-test.txt multinode-369000-m02:/home/docker/cp-test_multinode-369000-m03_multinode-369000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 ssh -n multinode-369000-m02 "sudo cat /home/docker/cp-test_multinode-369000-m03_multinode-369000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.12s)

TestMultiNode/serial/StopNode (2.68s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-369000 node stop m03: (2.198079536s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-369000 status: exit status 7 (244.841657ms)

-- stdout --
	multinode-369000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-369000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-369000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr: exit status 7 (238.732882ms)

-- stdout --
	multinode-369000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-369000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-369000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 03:57:01.382921   12678 out.go:296] Setting OutFile to fd 1 ...
	I1002 03:57:01.383232   12678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:57:01.383237   12678 out.go:309] Setting ErrFile to fd 2...
	I1002 03:57:01.383241   12678 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 03:57:01.383425   12678 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 03:57:01.383604   12678 out.go:303] Setting JSON to false
	I1002 03:57:01.383627   12678 mustload.go:65] Loading cluster: multinode-369000
	I1002 03:57:01.383669   12678 notify.go:220] Checking for updates...
	I1002 03:57:01.383950   12678 config.go:182] Loaded profile config "multinode-369000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 03:57:01.383962   12678 status.go:255] checking status of multinode-369000 ...
	I1002 03:57:01.384308   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.384377   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.392305   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59235
	I1002 03:57:01.392674   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.393097   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.393121   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.393335   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.393447   12678 main.go:141] libmachine: (multinode-369000) Calling .GetState
	I1002 03:57:01.393563   12678 main.go:141] libmachine: (multinode-369000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 03:57:01.393595   12678 main.go:141] libmachine: (multinode-369000) DBG | hyperkit pid from json: 12343
	I1002 03:57:01.394908   12678 status.go:330] multinode-369000 host status = "Running" (err=<nil>)
	I1002 03:57:01.394927   12678 host.go:66] Checking if "multinode-369000" exists ...
	I1002 03:57:01.395143   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.395169   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.403063   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59237
	I1002 03:57:01.403425   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.403890   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.403909   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.404124   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.404241   12678 main.go:141] libmachine: (multinode-369000) Calling .GetIP
	I1002 03:57:01.404324   12678 host.go:66] Checking if "multinode-369000" exists ...
	I1002 03:57:01.404575   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.404597   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.412877   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59239
	I1002 03:57:01.413227   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.413587   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.413607   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.413804   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.413901   12678 main.go:141] libmachine: (multinode-369000) Calling .DriverName
	I1002 03:57:01.414053   12678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 03:57:01.414076   12678 main.go:141] libmachine: (multinode-369000) Calling .GetSSHHostname
	I1002 03:57:01.414151   12678 main.go:141] libmachine: (multinode-369000) Calling .GetSSHPort
	I1002 03:57:01.414243   12678 main.go:141] libmachine: (multinode-369000) Calling .GetSSHKeyPath
	I1002 03:57:01.414357   12678 main.go:141] libmachine: (multinode-369000) Calling .GetSSHUsername
	I1002 03:57:01.414453   12678 sshutil.go:53] new ssh client: &{IP:192.168.70.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/multinode-369000/id_rsa Username:docker}
	I1002 03:57:01.453947   12678 ssh_runner.go:195] Run: systemctl --version
	I1002 03:57:01.457353   12678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:57:01.466975   12678 kubeconfig.go:92] found "multinode-369000" server: "https://192.168.70.40:8443"
	I1002 03:57:01.466995   12678 api_server.go:166] Checking apiserver status ...
	I1002 03:57:01.467030   12678 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 03:57:01.475576   12678 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1940/cgroup
	I1002 03:57:01.481253   12678 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/pod1e05a17f845973e648b81f96778424fc/3b65f625ed1dd1b64f6c1cdd4c8a64386463b179f5c44500192fedc2243e3c12"
	I1002 03:57:01.481307   12678 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod1e05a17f845973e648b81f96778424fc/3b65f625ed1dd1b64f6c1cdd4c8a64386463b179f5c44500192fedc2243e3c12/freezer.state
	I1002 03:57:01.487342   12678 api_server.go:204] freezer state: "THAWED"
	I1002 03:57:01.487360   12678 api_server.go:253] Checking apiserver healthz at https://192.168.70.40:8443/healthz ...
	I1002 03:57:01.490657   12678 api_server.go:279] https://192.168.70.40:8443/healthz returned 200:
	ok
	I1002 03:57:01.490668   12678 status.go:421] multinode-369000 apiserver status = Running (err=<nil>)
	I1002 03:57:01.490675   12678 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 03:57:01.490686   12678 status.go:255] checking status of multinode-369000-m02 ...
	I1002 03:57:01.490915   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.490937   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.498884   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59243
	I1002 03:57:01.499269   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.499588   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.499607   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.499820   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.499924   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .GetState
	I1002 03:57:01.500001   12678 main.go:141] libmachine: (multinode-369000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 03:57:01.500062   12678 main.go:141] libmachine: (multinode-369000-m02) DBG | hyperkit pid from json: 12377
	I1002 03:57:01.501370   12678 status.go:330] multinode-369000-m02 host status = "Running" (err=<nil>)
	I1002 03:57:01.501378   12678 host.go:66] Checking if "multinode-369000-m02" exists ...
	I1002 03:57:01.501623   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.501645   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.509665   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59245
	I1002 03:57:01.509984   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.510468   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.510484   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.510765   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.510902   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .GetIP
	I1002 03:57:01.511020   12678 host.go:66] Checking if "multinode-369000-m02" exists ...
	I1002 03:57:01.511290   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.511328   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.519098   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59247
	I1002 03:57:01.519446   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.519797   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.519811   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.520029   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.520145   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .DriverName
	I1002 03:57:01.520275   12678 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 03:57:01.520287   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .GetSSHHostname
	I1002 03:57:01.520359   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .GetSSHPort
	I1002 03:57:01.520443   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .GetSSHKeyPath
	I1002 03:57:01.520522   12678 main.go:141] libmachine: (multinode-369000-m02) Calling .GetSSHUsername
	I1002 03:57:01.520600   12678 sshutil.go:53] new ssh client: &{IP:192.168.70.41 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17340-9782/.minikube/machines/multinode-369000-m02/id_rsa Username:docker}
	I1002 03:57:01.558246   12678 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 03:57:01.567394   12678 status.go:257] multinode-369000-m02 status: &{Name:multinode-369000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 03:57:01.567410   12678 status.go:255] checking status of multinode-369000-m03 ...
	I1002 03:57:01.567666   12678 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 03:57:01.567694   12678 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 03:57:01.575616   12678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59250
	I1002 03:57:01.576095   12678 main.go:141] libmachine: () Calling .GetVersion
	I1002 03:57:01.576507   12678 main.go:141] libmachine: Using API Version  1
	I1002 03:57:01.576523   12678 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 03:57:01.576804   12678 main.go:141] libmachine: () Calling .GetMachineName
	I1002 03:57:01.576927   12678 main.go:141] libmachine: (multinode-369000-m03) Calling .GetState
	I1002 03:57:01.577043   12678 main.go:141] libmachine: (multinode-369000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 03:57:01.577117   12678 main.go:141] libmachine: (multinode-369000-m03) DBG | hyperkit pid from json: 12458
	I1002 03:57:01.578427   12678 main.go:141] libmachine: (multinode-369000-m03) DBG | hyperkit pid 12458 missing from process table
	I1002 03:57:01.578446   12678 status.go:330] multinode-369000-m03 host status = "Stopped" (err=<nil>)
	I1002 03:57:01.578452   12678 status.go:343] host is not running, skipping remaining checks
	I1002 03:57:01.578458   12678 status.go:257] multinode-369000-m03 status: &{Name:multinode-369000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.68s)

TestMultiNode/serial/StartAfterStop (27.41s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 node start m03 --alsologtostderr
E1002 03:57:14.998282   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-369000 node start m03 --alsologtostderr: (27.062007878s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.41s)

TestMultiNode/serial/RestartKeepsNodes (191.89s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-369000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-369000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-369000: (18.381736701s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr
E1002 03:57:55.156591   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 03:57:55.960472   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 03:58:02.341734   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:58:30.034954   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 03:59:17.884157   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr: (2m53.421419384s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-369000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (191.89s)

TestMultiNode/serial/DeleteNode (2.98s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-369000 node delete m03: (2.616469609s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.98s)

TestMultiNode/serial/StopMultiNode (16.48s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-369000 stop: (16.353890734s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-369000 status: exit status 7 (65.347337ms)

-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-369000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr: exit status 7 (64.850066ms)

-- stdout --
	multinode-369000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-369000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 04:01:00.333984   12894 out.go:296] Setting OutFile to fd 1 ...
	I1002 04:01:00.334637   12894 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:01:00.334653   12894 out.go:309] Setting ErrFile to fd 2...
	I1002 04:01:00.334660   12894 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 04:01:00.335252   12894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17340-9782/.minikube/bin
	I1002 04:01:00.335448   12894 out.go:303] Setting JSON to false
	I1002 04:01:00.335470   12894 mustload.go:65] Loading cluster: multinode-369000
	I1002 04:01:00.335516   12894 notify.go:220] Checking for updates...
	I1002 04:01:00.335768   12894 config.go:182] Loaded profile config "multinode-369000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 04:01:00.335779   12894 status.go:255] checking status of multinode-369000 ...
	I1002 04:01:00.336135   12894 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:01:00.336194   12894 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:01:00.344821   12894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59431
	I1002 04:01:00.345205   12894 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:01:00.345651   12894 main.go:141] libmachine: Using API Version  1
	I1002 04:01:00.345665   12894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:01:00.345909   12894 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:01:00.346021   12894 main.go:141] libmachine: (multinode-369000) Calling .GetState
	I1002 04:01:00.346113   12894 main.go:141] libmachine: (multinode-369000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:01:00.346171   12894 main.go:141] libmachine: (multinode-369000) DBG | hyperkit pid from json: 12749
	I1002 04:01:00.347197   12894 main.go:141] libmachine: (multinode-369000) DBG | hyperkit pid 12749 missing from process table
	I1002 04:01:00.347232   12894 status.go:330] multinode-369000 host status = "Stopped" (err=<nil>)
	I1002 04:01:00.347242   12894 status.go:343] host is not running, skipping remaining checks
	I1002 04:01:00.347247   12894 status.go:257] multinode-369000 status: &{Name:multinode-369000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 04:01:00.347269   12894 status.go:255] checking status of multinode-369000-m02 ...
	I1002 04:01:00.347504   12894 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I1002 04:01:00.347551   12894 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I1002 04:01:00.355255   12894 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:59433
	I1002 04:01:00.355563   12894 main.go:141] libmachine: () Calling .GetVersion
	I1002 04:01:00.355896   12894 main.go:141] libmachine: Using API Version  1
	I1002 04:01:00.355914   12894 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 04:01:00.356113   12894 main.go:141] libmachine: () Calling .GetMachineName
	I1002 04:01:00.356212   12894 main.go:141] libmachine: (multinode-369000-m02) Calling .GetState
	I1002 04:01:00.356287   12894 main.go:141] libmachine: (multinode-369000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I1002 04:01:00.356349   12894 main.go:141] libmachine: (multinode-369000-m02) DBG | hyperkit pid from json: 12788
	I1002 04:01:00.357347   12894 main.go:141] libmachine: (multinode-369000-m02) DBG | hyperkit pid 12788 missing from process table
	I1002 04:01:00.357386   12894 status.go:330] multinode-369000-m02 host status = "Stopped" (err=<nil>)
	I1002 04:01:00.357397   12894 status.go:343] host is not running, skipping remaining checks
	I1002 04:01:00.357402   12894 status.go:257] multinode-369000-m02 status: &{Name:multinode-369000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.48s)

TestMultiNode/serial/RestartMultiNode (128.32s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E1002 04:01:34.037563   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:02:01.729358   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:02:55.165589   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:03:02.349310   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-369000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m7.970693923s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-369000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (128.32s)

TestMultiNode/serial/ValidateNameConflict (40.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-369000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-369000-m02 --driver=hyperkit 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-369000-m02 --driver=hyperkit : exit status 14 (478.64997ms)

-- stdout --
	* [multinode-369000-m02] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-369000-m02' is duplicated with machine name 'multinode-369000-m02' in profile 'multinode-369000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-369000-m03 --driver=hyperkit 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-369000-m03 --driver=hyperkit : (36.629998683s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-369000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-369000: exit status 80 (250.287192ms)

-- stdout --
	* Adding node m03 to cluster multinode-369000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-369000-m03 already exists in multinode-369000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-369000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-369000-m03: (3.352377136s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.75s)

TestPreload (161.71s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-870000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E1002 04:04:18.221194   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-870000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m8.398220977s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-870000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-870000 image pull gcr.io/k8s-minikube/busybox: (1.171326512s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-870000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-870000: (8.265802445s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-870000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E1002 04:06:34.046343   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-870000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m18.464447211s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-870000 image list
helpers_test.go:175: Cleaning up "test-preload-870000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-870000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-870000: (5.258587995s)
--- PASS: TestPreload (161.71s)

TestScheduledStopUnix (105.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-760000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-760000 --memory=2048 --driver=hyperkit : (34.658438499s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-760000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-760000 -n scheduled-stop-760000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-760000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-760000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-760000 -n scheduled-stop-760000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-760000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-760000 --schedule 15s
E1002 04:07:55.173380   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1002 04:08:02.358305   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-760000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-760000: exit status 7 (54.95117ms)

-- stdout --
	scheduled-stop-760000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-760000 -n scheduled-stop-760000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-760000 -n scheduled-stop-760000: exit status 7 (53.794337ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-760000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-760000
--- PASS: TestScheduledStopUnix (105.98s)

TestSkaffold (108.99s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe120223207 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-811000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-811000 --memory=2600 --driver=hyperkit : (34.882768447s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe120223207 run --minikube-profile skaffold-811000 --kube-context skaffold-811000 --status-check=true --port-forward=false --interactive=false
E1002 04:09:25.359739   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe120223207 run --minikube-profile skaffold-811000 --kube-context skaffold-811000 --status-check=true --port-forward=false --interactive=false: (57.232998678s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-754d878bd5-lgkhg" [b2109d68-80e0-4671-b1c9-5535ad4e32c2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011822999s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-84fd655798-cklxq" [bcd91476-f608-4f2c-b051-d426c7cda5a1] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00622768s
helpers_test.go:175: Cleaning up "skaffold-811000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-811000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-811000: (5.258405829s)
--- PASS: TestSkaffold (108.99s)

                                                
                                    
TestRunningBinaryUpgrade (155.18s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.3213636523.exe start -p running-upgrade-029000 --memory=2200 --vm-driver=hyperkit 
E1002 04:12:55.127397   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:12:57.052255   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:13:02.309973   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.6.2.3213636523.exe start -p running-upgrade-029000 --memory=2200 --vm-driver=hyperkit : (1m26.809962598s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-029000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E1002 04:15:01.145286   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.151665   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.162972   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.184507   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.224760   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.306926   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.468212   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:01.789053   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:02.430163   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:03.712180   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:06.348791   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:15:11.469255   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-029000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m2.667936376s)
helpers_test.go:175: Cleaning up "running-upgrade-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-029000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-029000: (5.252024323s)
--- PASS: TestRunningBinaryUpgrade (155.18s)

                                                
                                    
TestKubernetesUpgrade (153.32s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 
E1002 04:15:21.710382   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m13.681428325s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-228000
version_upgrade_test.go:240: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-228000: (8.285771597s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-228000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-228000 status --format={{.Host}}: exit status 7 (54.16431ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit : (32.438696967s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-228000 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (512.307323ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-228000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-228000
	    minikube start -p kubernetes-upgrade-228000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2280002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-228000 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
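The K8S_DOWNGRADE_UNSUPPORTED refusal above comes down to a semantic-version comparison: the requested version (v1.16.0) is lower than the cluster's existing version (v1.28.2). A minimal Python sketch of that check, using only the two version strings from the log (the helper names are hypothetical, not minikube's actual Go code):

```python
# Illustrative version gate; minikube's real check lives in its Go codebase.

def parse_version(v: str) -> tuple:
    """Turn 'v1.28.2' into (1, 28, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_version_change(current: str, requested: str) -> str:
    """Refuse a downgrade, mirroring the exit status 106 failure above."""
    if parse_version(requested) < parse_version(current):
        return "K8S_DOWNGRADE_UNSUPPORTED"
    return "ok"

print(check_version_change("v1.28.2", "v1.16.0"))  # K8S_DOWNGRADE_UNSUPPORTED
print(check_version_change("v1.16.0", "v1.28.2"))  # ok
```

Tuple comparison is why v1.9.0 sorts below v1.16.0 here, where a plain string comparison would get it wrong.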
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:288: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-228000 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=hyperkit : (34.696679756s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-228000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-228000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-228000: (3.59933045s)
--- PASS: TestKubernetesUpgrade (153.32s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.38s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17340
- KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2000069903/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2000069903/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2000069903/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2000069903/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.38s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.01s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin
- MINIKUBE_LOCATION=17340
- KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2312350473/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2312350473/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2312350473/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2312350473/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.01s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                    
TestPause/serial/Start (50.19s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-117000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E1002 04:17:55.132590   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-117000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (50.187271454s)
--- PASS: TestPause/serial/Start (50.19s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-005000
version_upgrade_test.go:219: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-005000: (3.110228493s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-875000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-875000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (390.392942ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-875000] minikube v1.31.2 on Darwin 14.0
	  - MINIKUBE_LOCATION=17340
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17340-9782/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17340-9782/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
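The MK_USAGE failure above is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive. A hypothetical re-creation of that conflict in Python's argparse (the flag names mirror minikube's, but minikube implements this check in Go):

```python
# Illustrative sketch of the flag conflict behind the MK_USAGE error.
import argparse

parser = argparse.ArgumentParser(prog="start-sketch")
group = parser.add_mutually_exclusive_group()
group.add_argument("--no-kubernetes", action="store_true",
                   help="start the VM without deploying Kubernetes")
group.add_argument("--kubernetes-version",
                   help="Kubernetes version to deploy")

# Either flag alone parses fine:
parser.parse_args(["--no-kubernetes"])
parser.parse_args(["--kubernetes-version=1.20"])

# Combining them is rejected, analogous to exit status 14 in the log:
try:
    parser.parse_args(["--no-kubernetes", "--kubernetes-version=1.20"])
except SystemExit:
    print("rejected: --kubernetes-version conflicts with --no-kubernetes")
```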

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-875000 --driver=hyperkit 
E1002 04:18:02.318346   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-875000 --driver=hyperkit : (40.783131428s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-875000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.95s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-875000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-875000 --no-kubernetes --driver=hyperkit : (14.072581315s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-875000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-875000 status -o json: exit status 2 (136.031021ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-875000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-875000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-875000: (2.444179167s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.65s)
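The exit status 2 above is expected: `status -o json` emits a machine-readable snapshot in which the host runs but the Kubernetes components are stopped. A small sketch parsing a verbatim copy of that payload from the log:

```python
# Parse the `minikube status -o json` payload shown in the log above.
import json

payload = ('{"Name":"NoKubernetes-875000","Host":"Running","Kubelet":"Stopped",'
           '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(payload)
assert status["Host"] == "Running"
# With --no-kubernetes the VM is up but no cluster components run:
assert status["Kubelet"] == status["APIServer"] == "Stopped"
print(f'{status["Name"]}: host={status["Host"]}, kubelet={status["Kubelet"]}')
# NoKubernetes-875000: host=Running, kubelet=Stopped
```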

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (41.08s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-117000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-117000 --alsologtostderr -v=1 --driver=hyperkit : (41.065988727s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.08s)

                                                
                                    
TestNoKubernetes/serial/Start (17.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-875000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-875000 --no-kubernetes --driver=hyperkit : (17.763432616s)
--- PASS: TestNoKubernetes/serial/Start (17.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-875000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-875000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (117.015689ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-875000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-875000: (2.225898909s)
--- PASS: TestNoKubernetes/serial/Stop (2.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (15.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-875000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-875000 --driver=hyperkit : (15.340748429s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.34s)

                                                
                                    
TestPause/serial/Pause (0.52s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-117000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

                                                
                                    
TestPause/serial/VerifyStatus (0.15s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-117000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-117000 --output=json --layout=cluster: exit status 2 (149.561501ms)

                                                
                                                
-- stdout --
	{"Name":"pause-117000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-117000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.15s)
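The `--output=json --layout=cluster` payload above encodes state as HTTP-like status codes (200 OK, 405 Stopped, 418 Paused), which is why the paused apiserver reports 418. A sketch walking a trimmed copy of that structure (fields reduced to the ones used here):

```python
# Walk a trimmed copy of the --layout=cluster JSON from the log above.
import json

cluster = json.loads("""
{
  "Name": "pause-117000", "StatusCode": 418, "StatusName": "Paused",
  "Nodes": [{
    "Name": "pause-117000", "StatusCode": 200, "StatusName": "OK",
    "Components": {
      "apiserver": {"Name": "apiserver", "StatusCode": 418, "StatusName": "Paused"},
      "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}
    }
  }]
}
""")

for node in cluster["Nodes"]:
    for comp in node["Components"].values():
        print(f'{node["Name"]}/{comp["Name"]}: '
              f'{comp["StatusName"]} ({comp["StatusCode"]})')
# pause-117000/apiserver: Paused (418)
# pause-117000/kubelet: Stopped (405)
```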

                                                
                                    
TestPause/serial/Unpause (0.51s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-117000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

                                                
                                    
TestPause/serial/PauseAgain (0.57s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-117000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

                                                
                                    
TestPause/serial/DeletePaused (5.26s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-117000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-117000 --alsologtostderr -v=5: (5.257414423s)
--- PASS: TestPause/serial/DeletePaused (5.26s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-875000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-875000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (125.050397ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m6.255852821s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E1002 04:20:28.923511   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m9.53667441s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.54s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4wxbl" [9c2ef68f-d403-4165-8d52-6f3ef2aa930c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012879915s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6b78p" [9b1c8086-2e98-4ac2-ad8c-a43b04d65290] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6b78p" [9b1c8086-2e98-4ac2-ad8c-a43b04d65290] Running
E1002 04:20:58.191877   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.008670865s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-2rwf5" [b3a3db22-c59c-4eed-a7e0-cbc7273f13ba] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.014562222s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/Start (58.95s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (58.948486626s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.95s)

TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.15s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v2x5h" [c09982d2-e7db-464f-af18-92e29ef786cd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v2x5h" [c09982d2-e7db-464f-af18-92e29ef786cd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.009573237s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/false/Start (54.27s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (54.271702547s)
--- PASS: TestNetworkPlugins/group/false/Start (54.27s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kp6xt" [014805fd-a613-4843-ae68-dfe2440078bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kp6xt" [014805fd-a613-4843-ae68-dfe2440078bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.010188838s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.15s)

TestNetworkPlugins/group/false/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nwdsw" [f0b5539c-6972-42f0-bb47-444be4ffa333] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nwdsw" [f0b5539c-6972-42f0-bb47-444be4ffa333] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.007619963s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.20s)

TestNetworkPlugins/group/enable-default-cni/Start (49.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (49.233477552s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.23s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (59.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (59.669627395s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.67s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hz476" [cc0f5997-cc18-4055-b054-d41361b03339] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hz476" [cc0f5997-cc18-4055-b054-d41361b03339] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.009622153s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (60.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m0.821883692s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.82s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-27m72" [a22802e3-af03-423f-acca-db9bb0b90e4e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.011217879s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-f7874" [9548b115-fb7a-4226-bc24-98b8e9958b94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-f7874" [9548b115-fb7a-4226-bc24-98b8e9958b94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.010864366s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (54.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E1002 04:25:01.157478   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-766000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (54.193187178s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.14s)

TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-trghb" [bd6b603e-2fc0-43c9-9fd5-5a6026568a39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-trghb" [bd6b603e-2fc0-43c9-9fd5-5a6026568a39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.006563287s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-766000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (143.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-150000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-150000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m23.869877033s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (143.87s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-766000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-766000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cjsx8" [0b862887-db66-444b-85a4-2e2bfe073a90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cjsx8" [0b862887-db66-444b-85a4-2e2bfe073a90] Running
E1002 04:25:44.149824   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.155037   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.165606   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.187352   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.228499   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.309392   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.470301   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:44.791262   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:25:45.432815   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.009230254s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.22s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-766000 exec deployment/netcat -- nslookup kubernetes.default
E1002 04:25:46.714748   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-766000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E1002 04:41:34.040459   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:42:07.225824   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:42:15.427404   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:42:35.681579   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:42:39.919543   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:42:45.406073   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 04:42:53.686644   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:42:55.168218   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:43:02.352250   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (51.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-803000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:26:04.636486   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:26:05.382132   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 04:26:12.605804   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:12.612141   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:12.624209   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:12.645136   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:12.685378   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:12.766416   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:12.926974   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:13.248976   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:13.891114   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:15.171963   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:17.732144   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:22.853883   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:25.118371   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:26:33.094354   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:26:34.019530   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:26:53.575463   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-803000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2: (51.022381877s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.02s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-803000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cdd9291a-fdab-4a69-963e-313886db108b] Pending
helpers_test.go:344: "busybox" [cdd9291a-fdab-4a69-963e-313886db108b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cdd9291a-fdab-4a69-963e-313886db108b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.018110708s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-803000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-803000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-803000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/embed-certs/serial/Stop (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-803000 --alsologtostderr -v=3
E1002 04:27:06.081396   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-803000 --alsologtostderr -v=3: (8.233231244s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.23s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-803000 -n embed-certs-803000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-803000 -n embed-certs-803000: exit status 7 (53.764078ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-803000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (299.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-803000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:27:15.405426   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:15.411211   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:15.421895   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:15.442849   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:15.484331   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:15.564401   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:15.725391   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:16.046873   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:16.688545   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:17.969672   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:20.531610   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:25.653978   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:34.538886   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:27:35.896520   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:27:39.899655   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:39.906120   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:39.916997   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:39.939174   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:39.980780   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:40.062269   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:40.223582   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:40.545783   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:41.186459   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:42.466861   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:45.027639   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:27:50.149644   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-803000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.28.2: (4m59.0505975s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-803000 -n embed-certs-803000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.20s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-150000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f158af68-5379-4291-bd6d-1739cdace3ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 04:27:55.147367   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:27:56.378264   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f158af68-5379-4291-bd6d-1739cdace3ed] Running
E1002 04:28:00.391741   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:28:02.333394   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.02229712s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-150000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-150000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-150000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (8.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-150000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-150000 --alsologtostderr -v=3: (8.306859042s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-150000 -n old-k8s-version-150000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-150000 -n old-k8s-version-150000: exit status 7 (54.089081ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-150000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (471.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-150000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E1002 04:28:20.874036   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:28:28.005253   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:28:31.784066   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:31.789237   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:31.799398   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:31.820338   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:31.861869   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:31.943193   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:32.104932   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:32.425537   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:33.066443   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:34.347618   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:36.909079   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:37.340849   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:28:42.029626   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:52.270784   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:28:56.462233   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:29:01.835891   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:29:07.221152   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.227046   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.239039   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.260601   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.302363   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.384562   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.546939   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:07.868491   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:08.509238   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:09.790137   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:12.350384   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:12.751530   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:29:17.471052   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:27.711739   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:37.078657   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:29:48.193720   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:29:53.713335   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:29:59.263186   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:30:01.167044   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:30:03.581317   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:03.586592   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:03.597389   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:03.618891   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:03.658983   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:03.739903   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:03.961363   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:04.282345   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:04.922923   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:06.204593   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:08.765198   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:13.929259   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:23.758383   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:30:24.170702   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:29.155841   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:30:36.641549   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:36.646647   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:36.657475   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:36.677690   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:36.718286   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:36.799528   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:36.959733   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:37.279844   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:37.920362   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:39.201582   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:41.762031   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:44.158846   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:30:44.652364   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:30:46.884020   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:30:57.125701   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:31:11.850387   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:31:12.613828   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:31:15.636894   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:31:17.606499   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:31:24.301438   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:31:25.614177   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:31:34.027035   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:31:40.306698   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
E1002 04:31:51.079036   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:31:58.568756   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-150000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (7m51.58021626s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-150000 -n old-k8s-version-150000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (471.74s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dhcnp" [2fb1416c-f5ad-4a46-8a87-a40ccd0d2b39] Running
E1002 04:32:15.413356   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018264189s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dhcnp" [2fb1416c-f5ad-4a46-8a87-a40ccd0d2b39] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008414395s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-803000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-803000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/embed-certs/serial/Pause (1.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-803000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-803000 -n embed-certs-803000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-803000 -n embed-certs-803000: exit status 2 (145.018343ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-803000 -n embed-certs-803000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-803000 -n embed-certs-803000: exit status 2 (144.510863ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-803000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-803000 -n embed-certs-803000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-803000 -n embed-certs-803000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.79s)

TestStartStop/group/no-preload/serial/FirstStart (94.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:32:39.908780   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:32:43.107857   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:32:47.538064   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:32:55.155159   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:33:02.339529   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 04:33:07.602750   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
E1002 04:33:20.491619   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:33:31.792901   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:33:59.481149   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2: (1m34.610367382s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (94.61s)

TestStartStop/group/no-preload/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [87a5454f-c515-49eb-8d14-5781e14dfc39] Pending
helpers_test.go:344: "busybox" [87a5454f-c515-49eb-8d14-5781e14dfc39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 04:34:07.229419   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [87a5454f-c515-49eb-8d14-5781e14dfc39] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.012139118s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-113000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-113000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-113000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/no-preload/serial/Stop (8.25s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-113000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-113000 --alsologtostderr -v=3: (8.250997229s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000: exit status 7 (52.958112ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-113000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/no-preload/serial/SecondStart (300.33s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:34:34.922989   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:35:01.172756   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:35:03.589058   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:35:31.383004   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
E1002 04:35:36.648358   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:35:44.165065   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-113000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.28.2: (5m0.140571591s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-113000 -n no-preload-113000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.33s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-dmjwb" [0a37a198-e551-4e63-9e7a-42bf11865873] Running
E1002 04:36:04.335198   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013934564s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-dmjwb" [0a37a198-e551-4e63-9e7a-42bf11865873] Running
E1002 04:36:12.620115   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007542247s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-150000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/Pause (1.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-150000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-150000 -n old-k8s-version-150000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-150000 -n old-k8s-version-150000: exit status 2 (147.445756ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-150000 -n old-k8s-version-150000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-150000 -n old-k8s-version-150000: exit status 2 (148.247853ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-150000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-150000 -n old-k8s-version-150000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-150000 -n old-k8s-version-150000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.69s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-257000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:36:34.034881   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/ingress-addon-legacy-239000/client.crt: no such file or directory
E1002 04:37:15.419938   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/custom-flannel-766000/client.crt: no such file or directory
E1002 04:37:38.218332   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:37:39.915503   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/false-766000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-257000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2: (1m27.377907076s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.38s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-257000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d7adab4-5978-4674-800f-61d26d9fa049] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 04:37:53.680336   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:53.686635   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:53.697161   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:53.717530   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:53.758543   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:53.885108   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:54.045558   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [9d7adab4-5978-4674-800f-61d26d9fa049] Running
E1002 04:37:54.366614   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:55.008658   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:55.161601   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/addons-334000/client.crt: no such file or directory
E1002 04:37:56.289369   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:37:58.850329   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.017314434s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-257000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-257000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-257000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-257000 --alsologtostderr -v=3
E1002 04:38:02.346765   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/functional-686000/client.crt: no such file or directory
E1002 04:38:03.972321   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-257000 --alsologtostderr -v=3: (8.27062352s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000: exit status 7 (52.591729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-257000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-257000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:38:14.213490   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:38:31.799144   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/enable-default-cni-766000/client.crt: no such file or directory
E1002 04:38:34.694020   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
E1002 04:39:07.235047   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/flannel-766000/client.crt: no such file or directory
E1002 04:39:15.656478   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-257000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.28.2: (4m59.965753844s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xksvb" [b605cdb4-56b5-4011-a2c9-e755fd5bc6e0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013854729s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xksvb" [b605cdb4-56b5-4011-a2c9-e755fd5bc6e0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009730263s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-113000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-113000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/no-preload/serial/Pause (1.87s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-113000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-113000 -n no-preload-113000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-113000 -n no-preload-113000: exit status 2 (153.547979ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-113000 -n no-preload-113000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-113000 -n no-preload-113000: exit status 2 (153.871558ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-113000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-113000 -n no-preload-113000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-113000 -n no-preload-113000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.87s)

TestStartStop/group/newest-cni/serial/FirstStart (49.57s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-324000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:40:01.180771   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/skaffold-811000/client.crt: no such file or directory
E1002 04:40:03.595034   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/bridge-766000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-324000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2: (49.574258043s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.57s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-324000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (8.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-324000 --alsologtostderr -v=3
E1002 04:40:36.655553   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kubenet-766000/client.crt: no such file or directory
E1002 04:40:37.579109   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/old-k8s-version-150000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-324000 --alsologtostderr -v=3: (8.251895769s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-324000 -n newest-cni-324000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-324000 -n newest-cni-324000: exit status 7 (52.938228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-324000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (37.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-324000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2
E1002 04:40:44.172386   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/kindnet-766000/client.crt: no such file or directory
E1002 04:41:12.626118   10244 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17340-9782/.minikube/profiles/calico-766000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-324000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.28.2: (36.922466729s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-324000 -n newest-cni-324000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-324000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (1.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-324000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-324000 -n newest-cni-324000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-324000 -n newest-cni-324000: exit status 2 (156.619738ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-324000 -n newest-cni-324000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-324000 -n newest-cni-324000: exit status 2 (154.969303ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-324000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-324000 -n newest-cni-324000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-324000 -n newest-cni-324000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.84s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cp6cq" [0596370b-d21f-4085-88d6-f95d18a0d7cc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012133244s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cp6cq" [0596370b-d21f-4085-88d6-f95d18a0d7cc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007968251s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-257000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-257000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-257000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000: exit status 2 (144.580065ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000: exit status 2 (144.460695ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-257000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-257000 -n default-k8s-diff-port-257000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.82s)

Test skip (19/309)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.61s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-766000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-766000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-766000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /etc/hosts:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-766000

>>> host: crictl pods:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: crictl containers:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> k8s: describe netcat deployment:
error: context "cilium-766000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-766000" does not exist

>>> k8s: netcat logs:
error: context "cilium-766000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-766000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-766000" does not exist

>>> k8s: coredns logs:
error: context "cilium-766000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-766000" does not exist

>>> k8s: api server logs:
error: context "cilium-766000" does not exist

>>> host: /etc/cni:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: ip a s:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: ip r s:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: iptables-save:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: iptables table nat:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-766000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-766000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-766000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-766000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-766000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-766000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-766000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-766000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-766000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-766000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-766000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: kubelet daemon config:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> k8s: kubelet logs:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-766000

>>> host: docker daemon status:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: docker daemon config:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: docker system info:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: cri-docker daemon status:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: cri-docker daemon config:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: cri-dockerd version:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: containerd daemon status:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: containerd daemon config:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: containerd config dump:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: crio daemon status:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: crio daemon config:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: /etc/crio:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

>>> host: crio config:
* Profile "cilium-766000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-766000"

----------------------- debugLogs end: cilium-766000 [took: 5.223084739s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-766000
--- SKIP: TestNetworkPlugins/group/cilium (5.61s)

TestStartStop/group/disable-driver-mounts (0.38s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-759000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.38s)