Test Report: Hyperkit_macOS 15565

1a22b9432724c1a7c0bfc1f92a18db163006c245:2023-01-27:27621

Failed tests (14/298)

TestCertExpiration (232.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=3m --driver=hyperkit 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 90 (20.327867131s)

-- stdout --
	* [cert-expiration-729000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node cert-expiration-729000 in cluster cert-expiration-729000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 90
E0127 20:02:47.091071    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0127 20:05:31.896145    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:52.377437    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-729000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (23.949517656s)
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-729000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting control plane node cert-expiration-729000 in cluster cert-expiration-729000
	* Updating the running hyperkit "cert-expiration-729000" VM ...
	* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Verifying Kubernetes components...
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "cert-expiration-729000" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-01-27 20:05:55.322302 -0800 PST m=+2143.754742106
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-729000 -n cert-expiration-729000
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p cert-expiration-729000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p cert-expiration-729000 logs -n 25: (2.037446667s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-035000 sudo                 | cilium-035000             | jenkins | v1.28.0 | 27 Jan 23 20:00 PST |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-035000 sudo                 | cilium-035000             | jenkins | v1.28.0 | 27 Jan 23 20:00 PST |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-035000 sudo                 | cilium-035000             | jenkins | v1.28.0 | 27 Jan 23 20:00 PST |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-035000 sudo find            | cilium-035000             | jenkins | v1.28.0 | 27 Jan 23 20:00 PST |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-035000 sudo crio            | cilium-035000             | jenkins | v1.28.0 | 27 Jan 23 20:00 PST |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-035000                      | cilium-035000             | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | 27 Jan 23 20:00 PST |
	| start   | -p force-systemd-env-631000           | force-systemd-env-631000  | jenkins | v1.28.0 | 27 Jan 23 20:00 PST | 27 Jan 23 20:01 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p offline-docker-310000              | offline-docker-310000     | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:01 PST |
	| start   | -p force-systemd-flag-814000          | force-systemd-flag-814000 | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:02 PST |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-631000              | force-systemd-env-631000  | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:01 PST |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-631000           | force-systemd-env-631000  | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:01 PST |
	| start   | -p docker-flags-643000                | docker-flags-643000       | jenkins | v1.28.0 | 27 Jan 23 20:01 PST | 27 Jan 23 20:02 PST |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-814000             | force-systemd-flag-814000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-814000          | force-systemd-flag-814000 | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
	| start   | -p cert-expiration-729000             | cert-expiration-729000    | jenkins | v1.28.0 | 27 Jan 23 20:02 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-643000 ssh               | docker-flags-643000       | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-643000 ssh               | docker-flags-643000       | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-643000                | docker-flags-643000       | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:02 PST |
	| start   | -p cert-options-460000                | cert-options-460000       | jenkins | v1.28.0 | 27 Jan 23 20:02 PST | 27 Jan 23 20:03 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | cert-options-460000 ssh               | cert-options-460000       | jenkins | v1.28.0 | 27 Jan 23 20:03 PST | 27 Jan 23 20:03 PST |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-460000 -- sudo        | cert-options-460000       | jenkins | v1.28.0 | 27 Jan 23 20:03 PST | 27 Jan 23 20:03 PST |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-460000                | cert-options-460000       | jenkins | v1.28.0 | 27 Jan 23 20:03 PST | 27 Jan 23 20:03 PST |
	| start   | -p running-upgrade-052000             | running-upgrade-052000    | jenkins | v1.28.0 | 27 Jan 23 20:04 PST | 27 Jan 23 20:05 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-729000             | cert-expiration-729000    | jenkins | v1.28.0 | 27 Jan 23 20:05 PST | 27 Jan 23 20:05 PST |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-052000             | running-upgrade-052000    | jenkins | v1.28.0 | 27 Jan 23 20:05 PST |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 20:05:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 20:05:31.430413   10037 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:05:31.430567   10037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:05:31.430571   10037 out.go:309] Setting ErrFile to fd 2...
	I0127 20:05:31.430574   10037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:05:31.430683   10037 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 20:05:31.431177   10037 out.go:303] Setting JSON to false
	I0127 20:05:31.449709   10037 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":3906,"bootTime":1674874825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 20:05:31.449793   10037 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:05:31.479973   10037 out.go:177] * [cert-expiration-729000] minikube v1.28.0 on Darwin 13.2
	I0127 20:05:31.521792   10037 notify.go:220] Checking for updates...
	I0127 20:05:31.543564   10037 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:05:31.564703   10037 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 20:05:31.585754   10037 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:05:31.606755   10037 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:05:31.628580   10037 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 20:05:31.649793   10037 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:05:31.672338   10037 config.go:180] Loaded profile config "cert-expiration-729000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:05:31.673032   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:31.673106   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:31.680966   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52581
	I0127 20:05:31.681375   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:31.681771   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:31.681778   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:31.681974   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:31.682080   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:31.682214   10037 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:05:31.682465   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:31.682486   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:31.689207   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52583
	I0127 20:05:31.689553   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:31.689882   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:31.689890   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:31.690076   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:31.690170   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:31.717678   10037 out.go:177] * Using the hyperkit driver based on existing profile
	I0127 20:05:31.759488   10037 start.go:296] selected driver: hyperkit
	I0127 20:05:31.759569   10037 start.go:840] validating driver "hyperkit" against &{Name:cert-expiration-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:05:31.759743   10037 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:05:31.763859   10037 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:05:31.763993   10037 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0127 20:05:31.771086   10037 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0127 20:05:31.774339   10037 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:31.774351   10037 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0127 20:05:31.774424   10037 cni.go:84] Creating CNI manager for ""
	I0127 20:05:31.774436   10037 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:05:31.774446   10037 start_flags.go:319] config:
	{Name:cert-expiration-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namesp
ace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:05:31.774576   10037 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:05:31.816479   10037 out.go:177] * Starting control plane node cert-expiration-729000 in cluster cert-expiration-729000
	I0127 20:05:31.837589   10037 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:05:31.837665   10037 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 20:05:31.837689   10037 cache.go:57] Caching tarball of preloaded images
	I0127 20:05:31.837881   10037 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:05:31.837894   10037 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0127 20:05:31.838032   10037 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/config.json ...
	I0127 20:05:31.838906   10037 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:05:31.838951   10037 start.go:364] acquiring machines lock for cert-expiration-729000: {Name:mk69c04a34b14d26e3f74e414bcb566a33d5b215 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 20:05:31.839050   10037 start.go:368] acquired machines lock for "cert-expiration-729000" in 83.928µs
	I0127 20:05:31.839090   10037 start.go:96] Skipping create...Using existing machine configuration
	I0127 20:05:31.839104   10037 fix.go:55] fixHost starting: 
	I0127 20:05:31.839540   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:31.839565   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:31.847119   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52585
	I0127 20:05:31.847481   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:31.847863   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:31.847877   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:31.848074   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:31.848187   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:31.848279   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
	I0127 20:05:31.848371   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:31.848446   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
	I0127 20:05:31.849313   10037 fix.go:103] recreateIfNeeded on cert-expiration-729000: state=Running err=<nil>
	W0127 20:05:31.849324   10037 fix.go:129] unexpected machine state, will restart: <nil>
	I0127 20:05:31.891634   10037 out.go:177] * Updating the running hyperkit "cert-expiration-729000" VM ...
	I0127 20:05:27.314666    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:27.814670    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:28.312595    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:28.814565    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:29.313942    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:29.813568    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:30.314556    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:30.814517    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:31.312771    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:31.813046    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:31.912880   10037 machine.go:88] provisioning docker machine ...
	I0127 20:05:31.912947   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:31.913259   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetMachineName
	I0127 20:05:31.913480   10037 buildroot.go:166] provisioning hostname "cert-expiration-729000"
	I0127 20:05:31.913497   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetMachineName
	I0127 20:05:31.913693   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:31.913864   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:31.914051   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:31.914255   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:31.914448   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:31.914680   10037 main.go:141] libmachine: Using SSH client type: native
	I0127 20:05:31.914990   10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.23 22 <nil> <nil>}
	I0127 20:05:31.915001   10037 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-729000 && echo "cert-expiration-729000" | sudo tee /etc/hostname
	I0127 20:05:32.005292   10037 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-729000
	
	I0127 20:05:32.005305   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.005433   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.005509   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.005592   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.005677   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.005790   10037 main.go:141] libmachine: Using SSH client type: native
	I0127 20:05:32.005915   10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.23 22 <nil> <nil>}
	I0127 20:05:32.005928   10037 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-729000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-729000/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-729000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:05:32.084026   10037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
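The SSH command above updates `/etc/hosts` idempotently: it only touches the file when the hostname is missing, and prefers rewriting an existing `127.0.1.1` entry over appending. A minimal local reproduction of the same pattern, run against a scratch file (`./hosts.demo` and `demo-host` are placeholders, not minikube's values):

```shell
#!/usr/bin/env sh
HOSTS=./hosts.demo
NAME=demo-host
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# skip entirely if the hostname is already present
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, which is why minikube can safely re-run it on every provision.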
	I0127 20:05:32.084037   10037 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3235/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3235/.minikube}
	I0127 20:05:32.084051   10037 buildroot.go:174] setting up certificates
	I0127 20:05:32.084060   10037 provision.go:83] configureAuth start
	I0127 20:05:32.084065   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetMachineName
	I0127 20:05:32.084203   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetIP
	I0127 20:05:32.084289   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.084367   10037 provision.go:138] copyHostCerts
	I0127 20:05:32.084446   10037 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem, removing ...
	I0127 20:05:32.084454   10037 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem
	I0127 20:05:32.097697   10037 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem (1082 bytes)
	I0127 20:05:32.098068   10037 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem, removing ...
	I0127 20:05:32.098079   10037 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem
	I0127 20:05:32.098237   10037 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem (1123 bytes)
	I0127 20:05:32.098509   10037 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem, removing ...
	I0127 20:05:32.098515   10037 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem
	I0127 20:05:32.098636   10037 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem (1675 bytes)
	I0127 20:05:32.098849   10037 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-729000 san=[192.168.64.23 192.168.64.23 localhost 127.0.0.1 minikube cert-expiration-729000]
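The line above shows `provision.go` signing a server certificate against the machine CA with a SAN list covering the VM IP, `localhost`, `127.0.0.1`, and the hostnames. A rough `openssl` equivalent (this is a sketch of the same idea, not minikube's implementation; all file names and subjects are illustrative):

```shell
#!/usr/bin/env sh
# throwaway CA
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -key ca-key.pem -out ca.pem -days 1 -subj "/O=demo-ca"

# server key + CSR, then sign with the CA, embedding the SANs
openssl genrsa -out server-key.pem 2048 2>/dev/null
openssl req -new -key server-key.pem -out server.csr -subj "/O=jenkins.demo"
printf 'subjectAltName=IP:192.168.64.23,IP:127.0.0.1,DNS:localhost,DNS:minikube\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 1 -extfile san.cnf 2>/dev/null

openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
```

The resulting `server.pem`/`server-key.pem` pair corresponds to what the log next copies to `/etc/docker/` on the guest.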
	I0127 20:05:32.154738   10037 provision.go:172] copyRemoteCerts
	I0127 20:05:32.154792   10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:05:32.154811   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.154937   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.155046   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.155128   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.155227   10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
	I0127 20:05:32.200282   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 20:05:32.215501   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 20:05:32.230837   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 20:05:32.246059   10037 provision.go:86] duration metric: configureAuth took 161.987414ms
	I0127 20:05:32.246067   10037 buildroot.go:189] setting minikube options for container-runtime
	I0127 20:05:32.246204   10037 config.go:180] Loaded profile config "cert-expiration-729000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:05:32.246218   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:32.246351   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.246448   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.246525   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.246614   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.246685   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.246779   10037 main.go:141] libmachine: Using SSH client type: native
	I0127 20:05:32.246878   10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.23 22 <nil> <nil>}
	I0127 20:05:32.246883   10037 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:05:32.324943   10037 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 20:05:32.324949   10037 buildroot.go:70] root file system type: tmpfs
	I0127 20:05:32.325081   10037 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:05:32.325100   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.325224   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.325296   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.325377   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.325454   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.325593   10037 main.go:141] libmachine: Using SSH client type: native
	I0127 20:05:32.325700   10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.23 22 <nil> <nil>}
	I0127 20:05:32.325745   10037 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:05:32.414803   10037 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:05:32.414820   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.414946   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.415032   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.415137   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.415217   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.415337   10037 main.go:141] libmachine: Using SSH client type: native
	I0127 20:05:32.415450   10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.23 22 <nil> <nil>}
	I0127 20:05:32.415459   10037 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:05:32.497800   10037 main.go:141] libmachine: SSH cmd err, output: <nil>: 
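The `diff ... || { mv ...; systemctl ...; }` command above is a change-detection idiom: the candidate unit is written to `docker.service.new`, and the swap plus daemon reload and restart happen only when it differs from the installed file. A file-only demonstration of that pattern (the `systemctl` side is replaced by an `echo`, and the unit contents are placeholders):

```shell
#!/usr/bin/env sh
mkdir -p ./sysd
printf 'ExecStart=/usr/bin/dockerd --old\n' > ./sysd/docker.service
printf 'ExecStart=/usr/bin/dockerd --new\n' > ./sysd/docker.service.new

# swap in the new unit only when it differs from the installed one;
# on the real VM this branch also runs daemon-reload and restart
diff -u ./sysd/docker.service ./sysd/docker.service.new >/dev/null || {
  mv ./sysd/docker.service.new ./sysd/docker.service
  echo "unit changed"
}
cat ./sysd/docker.service
```

When the files match, `diff` exits 0, the `||` branch is skipped, and docker is left running untouched.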
	I0127 20:05:32.497806   10037 machine.go:91] provisioned docker machine in 584.931958ms
	I0127 20:05:32.497814   10037 start.go:300] post-start starting for "cert-expiration-729000" (driver="hyperkit")
	I0127 20:05:32.497818   10037 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:05:32.497828   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:32.498007   10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:05:32.498016   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.498104   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.498185   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.498251   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.498326   10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
	I0127 20:05:32.543474   10037 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:05:32.546025   10037 info.go:137] Remote host: Buildroot 2021.02.12
	I0127 20:05:32.546037   10037 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/addons for local assets ...
	I0127 20:05:32.546116   10037 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/files for local assets ...
	I0127 20:05:32.546259   10037 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem -> 44422.pem in /etc/ssl/certs
	I0127 20:05:32.546409   10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:05:32.551985   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /etc/ssl/certs/44422.pem (1708 bytes)
	I0127 20:05:32.568131   10037 start.go:303] post-start completed in 70.312991ms
	I0127 20:05:32.568143   10037 fix.go:57] fixHost completed within 729.063605ms
	I0127 20:05:32.568156   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.568281   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.568381   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.568500   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.568584   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.568688   10037 main.go:141] libmachine: Using SSH client type: native
	I0127 20:05:32.568798   10037 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.23 22 <nil> <nil>}
	I0127 20:05:32.568803   10037 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0127 20:05:32.645897   10037 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674878732.859804806
	
	I0127 20:05:32.645903   10037 fix.go:207] guest clock: 1674878732.859804806
	I0127 20:05:32.645907   10037 fix.go:220] Guest: 2023-01-27 20:05:32.859804806 -0800 PST Remote: 2023-01-27 20:05:32.568146 -0800 PST m=+1.187546836 (delta=291.658806ms)
	I0127 20:05:32.645926   10037 fix.go:191] guest clock delta is within tolerance: 291.658806ms
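The clock check above runs `date +%s.%N` on the guest, compares it with the host wall clock, and accepts the machine when the skew is small (here, ~292ms). A sketch of that comparison using the timestamps from this log (the `tolerance` value is illustrative, not minikube's real threshold):

```shell
#!/usr/bin/env sh
guest=1674878732.859804806   # parsed from `date +%s.%N` on the guest
remote=1674878732.568146     # host wall clock at roughly the same moment
delta=$(awk -v g="$guest" -v r="$remote" \
  'BEGIN { d = g - r; if (d < 0) d = -d; printf "%.6f", d }')
tolerance=1.0                # assumed threshold for this demo
awk -v d="$delta" -v t="$tolerance" 'BEGIN { exit !(d < t) }' \
  && echo "within tolerance: ${delta}s" \
  || echo "would adjust clock: ${delta}s"
```

A skew beyond tolerance matters because TLS certificate validation and etcd leases on the node are sensitive to clock drift.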
	I0127 20:05:32.645929   10037 start.go:83] releasing machines lock for "cert-expiration-729000", held for 806.892226ms
	I0127 20:05:32.645944   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:32.646069   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetIP
	I0127 20:05:32.646154   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:32.646466   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:32.646594   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:32.646682   10037 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 20:05:32.646708   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.646724   10037 ssh_runner.go:195] Run: cat /version.json
	I0127 20:05:32.646732   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:32.646804   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.646830   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:32.646886   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.646922   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:32.646952   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.646985   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:32.647020   10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
	I0127 20:05:32.647079   10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
	W0127 20:05:32.687525   10037 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0127 20:05:32.687589   10037 ssh_runner.go:195] Run: systemctl --version
	I0127 20:05:32.751318   10037 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 20:05:32.755512   10037 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 20:05:32.755580   10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 20:05:32.761318   10037 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0127 20:05:32.771947   10037 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 20:05:32.777308   10037 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0127 20:05:32.777316   10037 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:05:32.777386   10037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:05:32.793076   10037 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 20:05:32.793090   10037 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:05:32.793094   10037 start.go:472] detecting cgroup driver to use...
	I0127 20:05:32.793172   10037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:05:32.805431   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0127 20:05:32.811696   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:05:32.818019   10037 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:05:32.818072   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:05:32.824633   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:05:32.830923   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:05:32.837823   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:05:32.844685   10037 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:05:32.851797   10037 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
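The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` cgroup driver and the `runc.v2` runtime. A scratch reproduction of the `SystemdCgroup` flip (the file contents here are a minimal made-up fragment, not the VM's real config):

```shell
#!/usr/bin/env sh
cat > ./config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# same substitution the log runs: preserve indentation, force the value
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' ./config.toml
grep SystemdCgroup ./config.toml
```

Editing with anchored `sed` substitutions rather than templating the whole file keeps any other settings in the guest's config.toml intact.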
	I0127 20:05:32.858655   10037 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:05:32.864826   10037 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:05:32.871028   10037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:05:32.961920   10037 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:05:32.974188   10037 start.go:472] detecting cgroup driver to use...
	I0127 20:05:32.974252   10037 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:05:32.983819   10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:05:32.992737   10037 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 20:05:33.005065   10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:05:33.013689   10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:05:33.022060   10037 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:05:33.034812   10037 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:05:33.121278   10037 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:05:33.217290   10037 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:05:33.217303   10037 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
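The 144-byte `daemon.json` copied above is not printed in the log; a plausible shape for a file that sets docker's cgroup driver to `cgroupfs`, as the preceding line describes, might look like the following (assumed content for illustration only):

```shell
#!/usr/bin/env sh
cat > ./daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
# confirm the driver setting the subsequent `docker info` check will report
grep -o 'native.cgroupdriver=[a-z]*' ./daemon.json
```

Whatever the exact contents, the key point is that docker's cgroup driver must agree with the kubelet's (`cgroupDriver: cgroupfs` in the KubeletConfiguration further down), or kubelet refuses to start pods.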
	I0127 20:05:33.228482   10037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:05:33.323545   10037 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:05:34.547424   10037 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.223893832s)
	I0127 20:05:34.547475   10037 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 20:05:34.632374   10037 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 20:05:34.716786   10037 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 20:05:34.799783   10037 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:05:34.887497   10037 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 20:05:34.902733   10037 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 20:05:34.902807   10037 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 20:05:34.912654   10037 start.go:540] Will wait 60s for crictl version
	I0127 20:05:34.912707   10037 ssh_runner.go:195] Run: which crictl
	I0127 20:05:34.915300   10037 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 20:05:34.979884   10037 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0127 20:05:34.979952   10037 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:05:35.002679   10037 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 20:05:35.071662   10037 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0127 20:05:35.071815   10037 ssh_runner.go:195] Run: grep 192.168.64.1	host.minikube.internal$ /etc/hosts
	I0127 20:05:35.075875   10037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:05:35.084734   10037 localpath.go:92] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/client.crt -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/client.crt
	I0127 20:05:35.085003   10037 localpath.go:117] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/client.key -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/client.key
	I0127 20:05:35.085182   10037 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:05:35.085232   10037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:05:35.102018   10037 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 20:05:35.102025   10037 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:05:35.102093   10037 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:05:35.118516   10037 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 20:05:35.118531   10037 cache_images.go:84] Images are preloaded, skipping loading
	I0127 20:05:35.118599   10037 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 20:05:35.145820   10037 cni.go:84] Creating CNI manager for ""
	I0127 20:05:35.145831   10037 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:05:35.145851   10037 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0127 20:05:35.145864   10037 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.23 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-729000 NodeName:cert-expiration-729000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0127 20:05:35.145952   10037 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.64.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "cert-expiration-729000"
	  kubeletExtraArgs:
	    node-ip: 192.168.64.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.64.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 20:05:35.146039   10037 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cert-expiration-729000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0127 20:05:35.146095   10037 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0127 20:05:35.152454   10037 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 20:05:35.152497   10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 20:05:35.158468   10037 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (455 bytes)
	I0127 20:05:35.169722   10037 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 20:05:35.180566   10037 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0127 20:05:35.191593   10037 ssh_runner.go:195] Run: grep 192.168.64.23	control-plane.minikube.internal$ /etc/hosts
	I0127 20:05:35.193776   10037 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 20:05:35.201372   10037 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000 for IP: 192.168.64.23
	I0127 20:05:35.201382   10037 certs.go:186] acquiring lock for shared ca certs: {Name:mk29c07f32f81afc524ae789005062e84bfc25e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:35.201522   10037 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.key
	I0127 20:05:35.201573   10037 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3235/.minikube/proxy-client-ca.key
	I0127 20:05:35.201658   10037 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/client.key
	I0127 20:05:35.201677   10037 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca
	I0127 20:05:35.201694   10037 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca with IP's: [192.168.64.23 10.96.0.1 127.0.0.1 10.0.0.1]
	I0127 20:05:35.325279   10037 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca ...
	I0127 20:05:35.325290   10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca: {Name:mk4d91c120259812f82f819b1b530e466fc67aec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:35.325577   10037 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca ...
	I0127 20:05:35.325582   10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca: {Name:mk083b3eef99ce5d463fa9d03b82e06737dfbb52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:35.325756   10037 certs.go:333] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt.7d9037ca -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt
	I0127 20:05:35.326068   10037 certs.go:337] copying /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key.7d9037ca -> /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key
	I0127 20:05:35.326310   10037 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key
	I0127 20:05:35.326326   10037 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt with IP's: []
	I0127 20:05:35.402161   10037 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt ...
	I0127 20:05:35.402168   10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt: {Name:mk4ff3a897a7964e3c4ef42aadbfba8d3de95f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:35.402389   10037 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key ...
	I0127 20:05:35.402393   10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key: {Name:mk62d9b128d7b53edaec6a6ba328bfd0e5b97f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:35.402756   10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/4442.pem (1338 bytes)
	W0127 20:05:35.402790   10037 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/4442_empty.pem, impossibly tiny 0 bytes
	I0127 20:05:35.402798   10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem (1679 bytes)
	I0127 20:05:35.402828   10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem (1082 bytes)
	I0127 20:05:35.402855   10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem (1123 bytes)
	I0127 20:05:35.402882   10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem (1675 bytes)
	I0127 20:05:35.402940   10037 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem (1708 bytes)
	I0127 20:05:35.403401   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0127 20:05:35.419766   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 20:05:35.435246   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 20:05:35.450624   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/cert-expiration-729000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0127 20:05:35.465897   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 20:05:35.481529   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0127 20:05:35.496878   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 20:05:35.512119   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0127 20:05:35.527392   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/4442.pem --> /usr/share/ca-certificates/4442.pem (1338 bytes)
	I0127 20:05:35.542606   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /usr/share/ca-certificates/44422.pem (1708 bytes)
	I0127 20:05:35.558088   10037 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 20:05:35.573127   10037 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 20:05:35.584246   10037 ssh_runner.go:195] Run: openssl version
	I0127 20:05:35.587539   10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4442.pem && ln -fs /usr/share/ca-certificates/4442.pem /etc/ssl/certs/4442.pem"
	I0127 20:05:35.594409   10037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4442.pem
	I0127 20:05:35.597250   10037 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 03:34 /usr/share/ca-certificates/4442.pem
	I0127 20:05:35.597284   10037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4442.pem
	I0127 20:05:35.600762   10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4442.pem /etc/ssl/certs/51391683.0"
	I0127 20:05:35.607660   10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/44422.pem && ln -fs /usr/share/ca-certificates/44422.pem /etc/ssl/certs/44422.pem"
	I0127 20:05:35.614726   10037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/44422.pem
	I0127 20:05:35.617559   10037 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 03:34 /usr/share/ca-certificates/44422.pem
	I0127 20:05:35.617590   10037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/44422.pem
	I0127 20:05:35.620982   10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/44422.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 20:05:35.627560   10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 20:05:35.634354   10037 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:05:35.637216   10037 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:05:35.637245   10037 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 20:05:35.640648   10037 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 20:05:35.647393   10037 kubeadm.go:401] StartCluster: {Name:cert-expiration-729000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:cert-expiration-729000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:05:35.647472   10037 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 20:05:35.663103   10037 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 20:05:35.669486   10037 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 20:05:35.675606   10037 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 20:05:35.681822   10037 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 20:05:35.681840   10037 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 20:05:35.747445   10037 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0127 20:05:35.747548   10037 kubeadm.go:322] [preflight] Running pre-flight checks
	I0127 20:05:35.899233   10037 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 20:05:35.899313   10037 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 20:05:35.899384   10037 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0127 20:05:36.006971   10037 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 20:05:36.031134   10037 out.go:204]   - Generating certificates and keys ...
	I0127 20:05:36.031200   10037 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0127 20:05:36.031249   10037 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0127 20:05:36.107640   10037 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 20:05:36.358376   10037 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0127 20:05:32.313946    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:32.813573    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:33.313151    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:33.812358    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:34.312979    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:34.812234    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:35.313398    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:35.812812    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:36.312348    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:36.812404    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:36.503062   10037 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0127 20:05:36.954367   10037 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0127 20:05:37.070016   10037 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0127 20:05:37.070133   10037 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-729000 localhost] and IPs [192.168.64.23 127.0.0.1 ::1]
	I0127 20:05:37.336269   10037 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0127 20:05:37.336543   10037 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-729000 localhost] and IPs [192.168.64.23 127.0.0.1 ::1]
	I0127 20:05:37.659751   10037 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 20:05:37.902749   10037 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 20:05:38.151772   10037 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0127 20:05:38.151847   10037 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 20:05:38.255575   10037 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 20:05:38.634544   10037 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 20:05:38.837442   10037 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 20:05:39.068915   10037 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 20:05:39.079354   10037 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 20:05:39.080353   10037 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 20:05:39.080454   10037 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0127 20:05:39.170078   10037 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 20:05:39.195775   10037 out.go:204]   - Booting up control plane ...
	I0127 20:05:39.195861   10037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 20:05:39.195939   10037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 20:05:39.196000   10037 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 20:05:39.196071   10037 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 20:05:39.196188   10037 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0127 20:05:37.312379    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:37.814101    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:38.312806    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:38.812537    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:39.313304    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:39.812630    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:40.313115    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:40.812862    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:41.313786    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:41.813804    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:42.312911    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:42.813263    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:43.314156    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:43.812685    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:44.312470    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:44.812340    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:45.313554    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:45.813228    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:46.313716    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:46.813699    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:49.674793   10037 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.501763 seconds
	I0127 20:05:49.674884   10037 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 20:05:49.683872   10037 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 20:05:47.311991    9771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:47.318121    9771 api_server.go:71] duration metric: took 22.013115013s to wait for apiserver process to appear ...
	I0127 20:05:47.318136    9771 api_server.go:87] waiting for apiserver healthz status ...
	I0127 20:05:47.318148    9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
	I0127 20:05:52.699590   10037 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 20:05:52.699750   10037 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-729000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 20:05:53.207742   10037 kubeadm.go:322] [bootstrap-token] Using token: rrg0sd.e2ykdqsntf61sc8i
	I0127 20:05:53.246595   10037 out.go:204]   - Configuring RBAC rules ...
	I0127 20:05:53.246717   10037 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 20:05:53.248053   10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 20:05:53.254260   10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 20:05:53.257283   10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 20:05:53.260065   10037 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 20:05:53.263640   10037 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 20:05:53.272369   10037 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 20:05:53.443381   10037 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0127 20:05:53.651092   10037 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0127 20:05:53.651982   10037 kubeadm.go:322] 
	I0127 20:05:53.652043   10037 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0127 20:05:53.652048   10037 kubeadm.go:322] 
	I0127 20:05:53.652122   10037 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0127 20:05:53.652128   10037 kubeadm.go:322] 
	I0127 20:05:53.652144   10037 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0127 20:05:53.652202   10037 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 20:05:53.652236   10037 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 20:05:53.652245   10037 kubeadm.go:322] 
	I0127 20:05:53.652286   10037 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0127 20:05:53.652289   10037 kubeadm.go:322] 
	I0127 20:05:53.652334   10037 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 20:05:53.652338   10037 kubeadm.go:322] 
	I0127 20:05:53.652379   10037 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0127 20:05:53.652442   10037 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 20:05:53.652496   10037 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 20:05:53.652511   10037 kubeadm.go:322] 
	I0127 20:05:53.652571   10037 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 20:05:53.652615   10037 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0127 20:05:53.652617   10037 kubeadm.go:322] 
	I0127 20:05:53.652675   10037 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rrg0sd.e2ykdqsntf61sc8i \
	I0127 20:05:53.652751   10037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:76459747d447fbe53349461588d71983b7f5033bb09648befce7f96802f57b57 \
	I0127 20:05:53.652764   10037 kubeadm.go:322] 	--control-plane 
	I0127 20:05:53.652766   10037 kubeadm.go:322] 
	I0127 20:05:53.652829   10037 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0127 20:05:53.652833   10037 kubeadm.go:322] 
	I0127 20:05:53.652892   10037 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rrg0sd.e2ykdqsntf61sc8i \
	I0127 20:05:53.652989   10037 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:76459747d447fbe53349461588d71983b7f5033bb09648befce7f96802f57b57 
	I0127 20:05:53.654056   10037 kubeadm.go:322] W0128 04:05:35.958983    1701 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0127 20:05:53.654134   10037 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 20:05:53.654149   10037 cni.go:84] Creating CNI manager for ""
	I0127 20:05:53.654157   10037 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:05:53.713238   10037 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0127 20:05:53.750543   10037 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0127 20:05:53.762941   10037 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0127 20:05:53.774927   10037 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 20:05:53.774994   10037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 20:05:53.774997   10037 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.28.0 minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245 minikube.k8s.io/name=cert-expiration-729000 minikube.k8s.io/updated_at=2023_01_27T20_05_53_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 20:05:53.797009   10037 ops.go:34] apiserver oom_adj: -16
	I0127 20:05:53.888278   10037 kubeadm.go:1073] duration metric: took 113.33693ms to wait for elevateKubeSystemPrivileges.
	I0127 20:05:53.915378   10037 kubeadm.go:403] StartCluster complete in 18.268414695s
	I0127 20:05:53.915399   10037 settings.go:142] acquiring lock: {Name:mk80549a2c3028803e331f0580d721d5d766bd61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:53.915479   10037 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 20:05:53.916067   10037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/kubeconfig: {Name:mk69cf50f5abd22c9a63615b05ca8d5c80e5d91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:53.916308   10037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 20:05:53.916328   10037 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0127 20:05:53.916371   10037 addons.go:65] Setting storage-provisioner=true in profile "cert-expiration-729000"
	I0127 20:05:53.916371   10037 addons.go:65] Setting default-storageclass=true in profile "cert-expiration-729000"
	I0127 20:05:53.916383   10037 addons.go:227] Setting addon storage-provisioner=true in "cert-expiration-729000"
	W0127 20:05:53.916385   10037 addons.go:236] addon storage-provisioner should already be in state true
	I0127 20:05:53.916385   10037 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-729000"
	I0127 20:05:53.916417   10037 host.go:66] Checking if "cert-expiration-729000" exists ...
	I0127 20:05:53.916458   10037 config.go:180] Loaded profile config "cert-expiration-729000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:05:53.916654   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:53.916672   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:53.916708   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:53.916719   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:53.925102   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52610
	I0127 20:05:53.925581   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52612
	I0127 20:05:53.925625   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:53.926020   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:53.926027   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:53.926045   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:53.926259   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:53.926370   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:53.926377   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:53.926669   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:53.926824   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
	I0127 20:05:53.926952   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:53.926967   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:53.926971   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:53.927074   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
	I0127 20:05:53.934868   10037 addons.go:227] Setting addon default-storageclass=true in "cert-expiration-729000"
	W0127 20:05:53.934885   10037 addons.go:236] addon default-storageclass should already be in state true
	I0127 20:05:53.934915   10037 host.go:66] Checking if "cert-expiration-729000" exists ...
	I0127 20:05:53.935175   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52614
	I0127 20:05:53.935478   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:53.935496   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:53.936236   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:53.937460   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:53.937480   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:53.937720   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:53.937844   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
	I0127 20:05:53.937948   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:53.938075   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
	I0127 20:05:53.939003   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:53.943316   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52616
	I0127 20:05:53.980584   10037 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 20:05:52.262532    9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 20:05:52.262557    9771 api_server.go:102] status: https://192.168.64.25:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 20:05:52.762782    9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
	I0127 20:05:52.769733    9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0127 20:05:52.769753    9771 api_server.go:102] status: https://192.168.64.25:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0127 20:05:53.263140    9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
	I0127 20:05:53.267540    9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0127 20:05:53.267554    9771 api_server.go:102] status: https://192.168.64.25:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0127 20:05:53.764362    9771 api_server.go:252] Checking apiserver healthz at https://192.168.64.25:8443/healthz ...
	I0127 20:05:53.768632    9771 api_server.go:278] https://192.168.64.25:8443/healthz returned 200:
	ok
	I0127 20:05:53.773903    9771 api_server.go:140] control plane version: v1.17.0
	I0127 20:05:53.773919    9771 api_server.go:130] duration metric: took 6.455929345s to wait for apiserver health ...
	I0127 20:05:53.773925    9771 cni.go:84] Creating CNI manager for ""
	I0127 20:05:53.773936    9771 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:05:53.773948    9771 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 20:05:53.778226    9771 system_pods.go:59] 4 kube-system pods found
	I0127 20:05:53.778243    9771 system_pods.go:61] "coredns-6955765f44-4kg27" [9b2e9e1b-c463-40b7-a832-cd1b27921930] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0127 20:05:53.778250    9771 system_pods.go:61] "coredns-6955765f44-7ffc4" [ccdf5641-e668-4b3b-9b72-ede33ad90867] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0127 20:05:53.778254    9771 system_pods.go:61] "kube-proxy-nv5hs" [4137d463-f671-42bd-b020-b1bfbcef217e] Pending
	I0127 20:05:53.778258    9771 system_pods.go:61] "storage-provisioner" [d5b749b0-341b-4e5f-adfb-46c6f48adb45] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0127 20:05:53.778262    9771 system_pods.go:74] duration metric: took 4.309335ms to wait for pod list to return data ...
	I0127 20:05:53.778268    9771 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:05:53.780400    9771 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
	I0127 20:05:53.780415    9771 node_conditions.go:123] node cpu capacity is 2
	I0127 20:05:53.780429    9771 node_conditions.go:105] duration metric: took 2.156977ms to run NodePressure ...
	I0127 20:05:53.780442    9771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 20:05:54.007475    9771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 20:05:54.014547    9771 ops.go:34] apiserver oom_adj: -16
	I0127 20:05:54.014556    9771 kubeadm.go:637] restartCluster took 40.034616433s
	I0127 20:05:54.014561    9771 kubeadm.go:403] StartCluster complete in 40.06472281s
	I0127 20:05:54.014574    9771 settings.go:142] acquiring lock: {Name:mk80549a2c3028803e331f0580d721d5d766bd61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:54.014638    9771 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 20:05:54.015328    9771 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/kubeconfig: {Name:mk69cf50f5abd22c9a63615b05ca8d5c80e5d91b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:05:54.015605    9771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 20:05:54.015621    9771 addons.go:486] enableAddons start: toEnable=map[], additional=[]
	I0127 20:05:54.015691    9771 addons.go:65] Setting storage-provisioner=true in profile "running-upgrade-052000"
	I0127 20:05:54.015691    9771 addons.go:65] Setting default-storageclass=true in profile "running-upgrade-052000"
	I0127 20:05:54.015710    9771 addons.go:227] Setting addon storage-provisioner=true in "running-upgrade-052000"
	I0127 20:05:54.015717    9771 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-052000"
	W0127 20:05:54.015719    9771 addons.go:236] addon storage-provisioner should already be in state true
	I0127 20:05:54.015774    9771 host.go:66] Checking if "running-upgrade-052000" exists ...
	I0127 20:05:54.015788    9771 config.go:180] Loaded profile config "running-upgrade-052000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0127 20:05:54.016106    9771 kapi.go:59] client config for running-upgrade-052000: &rest.Config{Host:"https://192.168.64.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 20:05:54.016178    9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:54.016205    9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:54.016265    9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:54.016292    9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:54.025271    9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52622
	I0127 20:05:54.025809    9771 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.026088    9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52624
	I0127 20:05:54.026293    9771 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.026320    9771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.026550    9771 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.026603    9771 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.026971    9771 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.026986    9771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.027085    9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:54.027116    9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:54.027302    9771 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.028806    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetState
	I0127 20:05:54.030254    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:54.030374    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | hyperkit pid from json: 9576
	I0127 20:05:54.031448    9771 kapi.go:59] client config for running-upgrade-052000: &rest.Config{Host:"https://192.168.64.25:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/running-upgrade-052000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3235/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 20:05:54.035700    9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52626
	I0127 20:05:54.036101    9771 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.036490    9771 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.036508    9771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.036808    9771 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.036948    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetState
	I0127 20:05:54.037107    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:54.037246    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | hyperkit pid from json: 9576
	I0127 20:05:54.038373    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .DriverName
	I0127 20:05:54.042905    9771 addons.go:227] Setting addon default-storageclass=true in "running-upgrade-052000"
	I0127 20:05:54.060613    9771 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0127 20:05:54.060622    9771 addons.go:236] addon default-storageclass should already be in state true
	I0127 20:05:54.060663    9771 host.go:66] Checking if "running-upgrade-052000" exists ...
	I0127 20:05:54.081731    9771 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:05:54.081743    9771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 20:05:54.081781    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHHostname
	I0127 20:05:54.081969    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHPort
	I0127 20:05:54.082032    9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:54.082062    9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:54.082099    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHKeyPath
	I0127 20:05:54.082211    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHUsername
	I0127 20:05:54.082730    9771 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/running-upgrade-052000/id_rsa Username:docker}
	I0127 20:05:54.090512    9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52629
	I0127 20:05:54.090965    9771 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.091549    9771 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.091576    9771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.091818    9771 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.092235    9771 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:54.092264    9771 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:54.099956    9771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52631
	I0127 20:05:54.100372    9771 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.100766    9771 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.100781    9771 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.101015    9771 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.101137    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetState
	I0127 20:05:54.101246    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:54.101340    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | hyperkit pid from json: 9576
	I0127 20:05:54.102298    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .DriverName
	I0127 20:05:54.102491    9771 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 20:05:54.102502    9771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 20:05:54.102512    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHHostname
	I0127 20:05:54.102615    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHPort
	I0127 20:05:54.102715    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHKeyPath
	I0127 20:05:54.102830    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .GetSSHUsername
	I0127 20:05:54.102917    9771 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/running-upgrade-052000/id_rsa Username:docker}
	I0127 20:05:54.106491    9771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.64.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 20:05:54.128516    9771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:05:54.165587    9771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 20:05:54.319172    9771 start.go:908] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
	I0127 20:05:54.434474    9771 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:54.434493    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
	I0127 20:05:54.434647    9771 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:54.434656    9771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:54.434667    9771 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:54.434675    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
	I0127 20:05:54.434818    9771 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:54.434827    9771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:54.434838    9771 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:54.434846    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
	I0127 20:05:54.434936    9771 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:54.434951    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
	I0127 20:05:54.435010    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | Closing plugin on server side
	I0127 20:05:54.435089    9771 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:54.435128    9771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:54.435135    9771 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:54.435159    9771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:54.435180    9771 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:54.435223    9771 main.go:141] libmachine: (running-upgrade-052000) Calling .Close
	I0127 20:05:54.435237    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | Closing plugin on server side
	I0127 20:05:54.435370    9771 main.go:141] libmachine: (running-upgrade-052000) DBG | Closing plugin on server side
	I0127 20:05:54.435452    9771 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:54.435474    9771 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:53.981052   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.000605   10037 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:05:54.000616   10037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 20:05:54.000634   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:54.000859   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:54.001041   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.001076   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.001082   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:54.001319   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:54.001562   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.001627   10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
	I0127 20:05:54.002238   10037 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:05:54.002266   10037 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:05:54.005684   10037 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.64.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 20:05:54.010175   10037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52619
	I0127 20:05:54.010507   10037 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:05:54.010870   10037 main.go:141] libmachine: Using API Version  1
	I0127 20:05:54.010883   10037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:05:54.011081   10037 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:05:54.011171   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetState
	I0127 20:05:54.011268   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:05:54.011350   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | hyperkit pid from json: 9398
	I0127 20:05:54.012255   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .DriverName
	I0127 20:05:54.012418   10037 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 20:05:54.012423   10037 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 20:05:54.012431   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHHostname
	I0127 20:05:54.012509   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHPort
	I0127 20:05:54.012611   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHKeyPath
	I0127 20:05:54.012702   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .GetSSHUsername
	I0127 20:05:54.012775   10037 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/cert-expiration-729000/id_rsa Username:docker}
	I0127 20:05:54.066197   10037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 20:05:54.087054   10037 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 20:05:54.439646   10037 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-729000" context rescaled to 1 replicas
	I0127 20:05:54.439665   10037 start.go:221] Will wait 6m0s for node &{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:05:54.473718    9771 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0127 20:05:54.514486   10037 out.go:177] * Verifying Kubernetes components...
	I0127 20:05:54.514480    9771 addons.go:488] enableAddons completed in 498.883283ms
	I0127 20:05:54.589753    9771 kapi.go:248] "coredns" deployment in "kube-system" namespace and "running-upgrade-052000" context rescaled to 1 replicas
	I0127 20:05:54.589782    9771 start.go:221] Will wait 6m0s for node &{Name:minikube IP:192.168.64.25 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:05:54.610530    9771 out.go:177] * Verifying Kubernetes components...
	I0127 20:05:54.668693    9771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:05:54.675145    9771 kubeadm.go:515] skip waiting for components based on config.
	I0127 20:05:54.675159    9771 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:05:54.686532    9771 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
	I0127 20:05:54.686547    9771 node_conditions.go:123] node cpu capacity is 2
	I0127 20:05:54.686554    9771 node_conditions.go:105] duration metric: took 11.391146ms to run NodePressure ...
	I0127 20:05:54.686561    9771 start.go:226] waiting for startup goroutines ...
	I0127 20:05:54.686921    9771 ssh_runner.go:195] Run: rm -f paused
	I0127 20:05:54.726717    9771 start.go:538] kubectl: 1.25.4, cluster: 1.17.0 (minor skew: 8)
	I0127 20:05:54.747501    9771 out.go:177] 
	W0127 20:05:54.784939    9771 out.go:239] ! /usr/local/bin/kubectl is version 1.25.4, which may have incompatibilities with Kubernetes 1.17.0.
	I0127 20:05:54.822646    9771 out.go:177]   - Want kubectl v1.17.0? Try 'minikube kubectl -- get pods -A'
	I0127 20:05:54.880752    9771 out.go:177] * Done! kubectl is now configured to use "running-upgrade-052000" cluster and "" namespace by default
	I0127 20:05:54.588794   10037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:05:54.695327   10037 start.go:908] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS's ConfigMap
	I0127 20:05:55.019014   10037 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:55.019024   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
	I0127 20:05:55.019040   10037 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:55.019049   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
	I0127 20:05:55.019263   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | Closing plugin on server side
	I0127 20:05:55.019289   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | Closing plugin on server side
	I0127 20:05:55.019297   10037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:55.019305   10037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:55.019310   10037 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:55.019318   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
	I0127 20:05:55.019306   10037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:55.019376   10037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:55.019408   10037 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:55.019420   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
	I0127 20:05:55.019553   10037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:55.019560   10037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:55.019599   10037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:55.019605   10037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:55.019614   10037 main.go:141] libmachine: Making call to close driver server
	I0127 20:05:55.019620   10037 main.go:141] libmachine: (cert-expiration-729000) Calling .Close
	I0127 20:05:55.019633   10037 main.go:141] libmachine: (cert-expiration-729000) DBG | Closing plugin on server side
	I0127 20:05:55.019825   10037 main.go:141] libmachine: Successfully made call to close driver server
	I0127 20:05:55.019831   10037 main.go:141] libmachine: Making call to close connection to plugin binary
	I0127 20:05:55.020129   10037 api_server.go:51] waiting for apiserver process to appear ...
	I0127 20:05:55.041684   10037 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 20:05:55.041785   10037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 20:05:55.115731   10037 addons.go:488] enableAddons completed in 1.199391115s
	I0127 20:05:55.127154   10037 api_server.go:71] duration metric: took 687.485477ms to wait for apiserver process to appear ...
	I0127 20:05:55.127168   10037 api_server.go:87] waiting for apiserver healthz status ...
	I0127 20:05:55.127186   10037 api_server.go:252] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
	I0127 20:05:55.131531   10037 api_server.go:278] https://192.168.64.23:8443/healthz returned 200:
	ok
	I0127 20:05:55.132234   10037 api_server.go:140] control plane version: v1.26.1
	I0127 20:05:55.132244   10037 api_server.go:130] duration metric: took 5.073117ms to wait for apiserver health ...
	I0127 20:05:55.132254   10037 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 20:05:55.137518   10037 system_pods.go:59] 5 kube-system pods found
	I0127 20:05:55.137538   10037 system_pods.go:61] "etcd-cert-expiration-729000" [0bec7128-396f-402f-8948-9bf5db76a8fd] Pending
	I0127 20:05:55.137543   10037 system_pods.go:61] "kube-apiserver-cert-expiration-729000" [6e5cac21-ac6e-4c04-9878-fd9d325fb961] Pending
	I0127 20:05:55.137547   10037 system_pods.go:61] "kube-controller-manager-cert-expiration-729000" [9f8e6296-e586-4599-a39e-3dd88b191593] Pending
	I0127 20:05:55.137551   10037 system_pods.go:61] "kube-scheduler-cert-expiration-729000" [87e82be0-8864-4da6-8593-83145af5e215] Pending
	I0127 20:05:55.137564   10037 system_pods.go:61] "storage-provisioner" [eea6ffb0-41ec-4ee7-baca-63c758359b69] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0127 20:05:55.137568   10037 system_pods.go:74] duration metric: took 5.311376ms to wait for pod list to return data ...
	I0127 20:05:55.137575   10037 kubeadm.go:578] duration metric: took 697.910946ms to wait for : map[apiserver:true system_pods:true] ...
	I0127 20:05:55.137587   10037 node_conditions.go:102] verifying NodePressure condition ...
	I0127 20:05:55.140104   10037 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0127 20:05:55.140118   10037 node_conditions.go:123] node cpu capacity is 2
	I0127 20:05:55.140126   10037 node_conditions.go:105] duration metric: took 2.536474ms to run NodePressure ...
	I0127 20:05:55.140132   10037 start.go:226] waiting for startup goroutines ...
	I0127 20:05:55.140457   10037 ssh_runner.go:195] Run: rm -f paused
	I0127 20:05:55.179678   10037 start.go:538] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0127 20:05:55.200419   10037 out.go:177] * Done! kubectl is now configured to use "cert-expiration-729000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-01-28 04:02:18 UTC, ends at Sat 2023-01-28 04:05:56 UTC. --
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.306380277Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5fac51758de637db5711bdcade68115305c238d1b6062536e6b473f11ee970f8 pid=2034 runtime=io.containerd.runc.v2
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474321665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474448207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474506449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.474753684Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/09f9be266e60386919d6e9174c77ec460072c4907c415e904e1cb3da22c0656b pid=2068 runtime=io.containerd.runc.v2
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547667876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547708778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547716466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.547815240Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e100c9dfaa645a97d0a481e5cd4a14603764b569114d3c7e5837834abbee083f pid=2103 runtime=io.containerd.runc.v2
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880236211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880473337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880559102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.880843667Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e90861c322335deda0553b14a4f8943dab0937810203b8ee9bc4978e9ddb011c pid=2163 runtime=io.containerd.runc.v2
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.889441115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.889809966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.889872917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:05:45 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:45.890642983Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ee29e39c94dd8fd9ef9118d80751491f39701fc8750a36ee7daff2642c1ddeb8 pid=2190 runtime=io.containerd.runc.v2
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.071856013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.071916984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.071926404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.072460166Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fe78b76dec737fec3cd6b49fffe5e96c82d8d185f9a058d934c83ca9c08f0802 pid=2238 runtime=io.containerd.runc.v2
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.486889975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.486962981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.486973213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 28 04:05:46 cert-expiration-729000 dockerd[1360]: time="2023-01-28T04:05:46.487298302Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ef2d7d5804f0b250b1d706ad02493a4bd07a8ca2d098b17780b5de98262d4acb pid=2314 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	ef2d7d5804f0b       655493523f607       10 seconds ago      Running             kube-scheduler            0                   fe78b76dec737
	ee29e39c94dd8       e9c08e11b07f6       11 seconds ago      Running             kube-controller-manager   0                   09f9be266e603
	e90861c322335       deb04688c4a35       11 seconds ago      Running             kube-apiserver            0                   e100c9dfaa645
	5fac51758de63       fce326961ae2d       11 seconds ago      Running             etcd                      0                   b2a84ed55df97
	
	* 
	* ==> describe nodes <==
	* Name:               cert-expiration-729000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-expiration-729000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a22b9432724c1a7c0bfc1f92a18db163006c245
	                    minikube.k8s.io/name=cert-expiration-729000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_27T20_05_53_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 04:05:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-expiration-729000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 04:05:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 04:05:55 +0000   Sat, 28 Jan 2023 04:05:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 04:05:55 +0000   Sat, 28 Jan 2023 04:05:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 04:05:55 +0000   Sat, 28 Jan 2023 04:05:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 04:05:55 +0000   Sat, 28 Jan 2023 04:05:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.64.23
	  Hostname:    cert-expiration-729000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017572Ki
	  pods:               110
	System Info:
	  Machine ID:                 139e4a18d81a4104bb2f65dd3a7d7d81
	  System UUID:                8ab711ed-0000-0000-8fe6-149d997fca88
	  Boot ID:                    103ebed0-4a1d-40ed-9dd1-e3571ca42c78
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-cert-expiration-729000                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3s
	  kube-system                 kube-apiserver-cert-expiration-729000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-cert-expiration-729000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-cert-expiration-729000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (5%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node cert-expiration-729000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node cert-expiration-729000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node cert-expiration-729000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                1s    kubelet  Node cert-expiration-729000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.972526] systemd-fstab-generator[531]: Ignoring "noauto" for root device
	[  +0.088260] systemd-fstab-generator[542]: Ignoring "noauto" for root device
	[  +5.501415] systemd-fstab-generator[730]: Ignoring "noauto" for root device
	[  +1.234658] kauditd_printk_skb: 16 callbacks suppressed
	[  +0.214029] systemd-fstab-generator[892]: Ignoring "noauto" for root device
	[  +0.202014] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +0.090948] systemd-fstab-generator[938]: Ignoring "noauto" for root device
	[  +0.099109] systemd-fstab-generator[951]: Ignoring "noauto" for root device
	[  +1.312105] systemd-fstab-generator[1100]: Ignoring "noauto" for root device
	[  +0.081311] systemd-fstab-generator[1111]: Ignoring "noauto" for root device
	[  +0.097154] systemd-fstab-generator[1122]: Ignoring "noauto" for root device
	[  +0.088567] systemd-fstab-generator[1133]: Ignoring "noauto" for root device
	[Jan28 04:05] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +0.167442] systemd-fstab-generator[1321]: Ignoring "noauto" for root device
	[  +0.092339] systemd-fstab-generator[1332]: Ignoring "noauto" for root device
	[  +0.105251] systemd-fstab-generator[1345]: Ignoring "noauto" for root device
	[  +1.175498] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.136199] systemd-fstab-generator[1492]: Ignoring "noauto" for root device
	[  +0.088589] systemd-fstab-generator[1503]: Ignoring "noauto" for root device
	[  +0.079662] systemd-fstab-generator[1514]: Ignoring "noauto" for root device
	[  +0.092603] systemd-fstab-generator[1525]: Ignoring "noauto" for root device
	[  +4.271616] systemd-fstab-generator[1776]: Ignoring "noauto" for root device
	[  +0.421234] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.790743] systemd-fstab-generator[2536]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [5fac51758de6] <==
	* {"level":"info","ts":"2023-01-28T04:05:45.727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 switched to configuration voters=(3857958311015864865)"}
	{"level":"info","ts":"2023-01-28T04:05:45.727Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","added-peer-id":"358a38a4be5dda21","added-peer-peer-urls":["https://192.168.64.23:2380"]}
	{"level":"info","ts":"2023-01-28T04:05:45.739Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T04:05:45.739Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"358a38a4be5dda21","initial-advertise-peer-urls":["https://192.168.64.23:2380"],"listen-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T04:05:45.740Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T04:05:45.742Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.64.23:2380"}
	{"level":"info","ts":"2023-01-28T04:05:45.742Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.64.23:2380"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 is starting a new election at term 1"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgPreVoteResp from 358a38a4be5dda21 at term 1"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became candidate at term 2"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgVoteResp from 358a38a4be5dda21 at term 2"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became leader at term 2"}
	{"level":"info","ts":"2023-01-28T04:05:46.110Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 358a38a4be5dda21 elected leader 358a38a4be5dda21 at term 2"}
	{"level":"info","ts":"2023-01-28T04:05:46.115Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"358a38a4be5dda21","local-member-attributes":"{Name:cert-expiration-729000 ClientURLs:[https://192.168.64.23:2379]}","request-path":"/0/members/358a38a4be5dda21/attributes","cluster-id":"bf21a475ce91bca1","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T04:05:46.115Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:05:46.116Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.64.23:2379"}
	{"level":"info","ts":"2023-01-28T04:05:46.116Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T04:05:46.116Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T04:05:46.117Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T04:05:46.119Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T04:05:46.123Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T04:05:46.124Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T04:05:46.124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T04:05:46.124Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  04:05:57 up 3 min,  0 users,  load average: 0.42, 0.12, 0.03
	Linux cert-expiration-729000 5.10.57 #1 SMP Sat Jan 28 02:15:18 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e90861c32233] <==
	* I0128 04:05:48.595703       1 controller.go:615] quota admission added evaluator for: namespaces
	I0128 04:05:48.614923       1 cache.go:39] Caches are synced for autoregister controller
	I0128 04:05:48.615279       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 04:05:48.616243       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 04:05:48.616973       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 04:05:48.617049       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 04:05:48.617136       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0128 04:05:48.619213       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0128 04:05:48.619306       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0128 04:05:48.644426       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0128 04:05:48.656786       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0128 04:05:49.319314       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 04:05:49.520112       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0128 04:05:49.527916       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0128 04:05:49.527949       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 04:05:49.839985       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 04:05:49.861833       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0128 04:05:49.910412       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0128 04:05:49.916919       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.64.23]
	I0128 04:05:49.917742       1 controller.go:615] quota admission added evaluator for: endpoints
	I0128 04:05:49.920179       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0128 04:05:50.574873       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 04:05:53.660454       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 04:05:53.667353       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0128 04:05:53.675800       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [ee29e39c94dd] <==
	* I0128 04:05:50.582791       1 cronjob_controllerv2.go:137] "Starting cronjob controller v2"
	I0128 04:05:50.583297       1 shared_informer.go:273] Waiting for caches to sync for cronjob
	I0128 04:05:50.588319       1 controllermanager.go:622] Started "csrapproving"
	I0128 04:05:50.588575       1 certificate_controller.go:112] Starting certificate controller "csrapproving"
	I0128 04:05:50.588658       1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving
	I0128 04:05:50.590188       1 controllermanager.go:622] Started "csrcleaner"
	I0128 04:05:50.590199       1 cleaner.go:82] Starting CSR cleaner controller
	I0128 04:05:50.596558       1 node_lifecycle_controller.go:492] Controller will reconcile labels.
	I0128 04:05:50.596604       1 controllermanager.go:622] Started "nodelifecycle"
	I0128 04:05:50.596833       1 node_lifecycle_controller.go:527] Sending events to api server.
	I0128 04:05:50.596866       1 node_lifecycle_controller.go:538] Starting node controller
	I0128 04:05:50.596872       1 shared_informer.go:273] Waiting for caches to sync for taint
	I0128 04:05:50.602075       1 controllermanager.go:622] Started "podgc"
	I0128 04:05:50.602279       1 gc_controller.go:102] Starting GC controller
	I0128 04:05:50.602307       1 shared_informer.go:273] Waiting for caches to sync for GC
	I0128 04:05:50.607603       1 controllermanager.go:622] Started "serviceaccount"
	I0128 04:05:50.607801       1 serviceaccounts_controller.go:111] Starting service account controller
	I0128 04:05:50.607830       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0128 04:05:50.613104       1 controllermanager.go:622] Started "replicaset"
	I0128 04:05:50.613413       1 replica_set.go:201] Starting replicaset controller
	I0128 04:05:50.613421       1 shared_informer.go:273] Waiting for caches to sync for ReplicaSet
	I0128 04:05:50.619157       1 controllermanager.go:622] Started "persistentvolume-binder"
	I0128 04:05:50.619699       1 pv_controller_base.go:318] Starting persistent volume controller
	I0128 04:05:50.619726       1 shared_informer.go:273] Waiting for caches to sync for persistent volume
	I0128 04:05:50.670200       1 shared_informer.go:280] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [ef2d7d5804f0] <==
	* W0128 04:05:48.604286       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0128 04:05:48.604294       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0128 04:05:48.605697       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0128 04:05:48.605753       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0128 04:05:48.605933       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0128 04:05:48.605984       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0128 04:05:48.606062       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0128 04:05:48.606108       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0128 04:05:48.606151       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0128 04:05:48.606196       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0128 04:05:48.606267       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0128 04:05:48.606315       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0128 04:05:49.508724       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0128 04:05:49.508800       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0128 04:05:49.518012       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0128 04:05:49.518085       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0128 04:05:49.580469       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0128 04:05:49.580567       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0128 04:05:49.613964       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0128 04:05:49.614001       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0128 04:05:49.651546       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0128 04:05:49.651616       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0128 04:05:49.700131       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0128 04:05:49.700167       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0128 04:05:49.995064       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-28 04:02:18 UTC, ends at Sat 2023-01-28 04:05:57 UTC. --
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064103    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-k8s-certs\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064191    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bf4ae35f72ae3f906ee969eaec31c6b5-kubeconfig\") pod \"kube-scheduler-cert-expiration-729000\" (UID: \"bf4ae35f72ae3f906ee969eaec31c6b5\") " pod="kube-system/kube-scheduler-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064213    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/777e73f7ad1ac012089ecaaa2fcbabab-etcd-data\") pod \"etcd-cert-expiration-729000\" (UID: \"777e73f7ad1ac012089ecaaa2fcbabab\") " pod="kube-system/etcd-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064232    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f7b59a0f3e7bb502743e8d8e38b7d9a-ca-certs\") pod \"kube-apiserver-cert-expiration-729000\" (UID: \"3f7b59a0f3e7bb502743e8d8e38b7d9a\") " pod="kube-system/kube-apiserver-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064252    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-ca-certs\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064313    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-flexvolume-dir\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064400    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-kubeconfig\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064424    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e1bcb52bd06bfd5e74451adc84cc53e8-usr-share-ca-certificates\") pod \"kube-controller-manager-cert-expiration-729000\" (UID: \"e1bcb52bd06bfd5e74451adc84cc53e8\") " pod="kube-system/kube-controller-manager-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064445    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/777e73f7ad1ac012089ecaaa2fcbabab-etcd-certs\") pod \"etcd-cert-expiration-729000\" (UID: \"777e73f7ad1ac012089ecaaa2fcbabab\") " pod="kube-system/etcd-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064463    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f7b59a0f3e7bb502743e8d8e38b7d9a-k8s-certs\") pod \"kube-apiserver-cert-expiration-729000\" (UID: \"3f7b59a0f3e7bb502743e8d8e38b7d9a\") " pod="kube-system/kube-apiserver-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.064528    2556 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f7b59a0f3e7bb502743e8d8e38b7d9a-usr-share-ca-certificates\") pod \"kube-apiserver-cert-expiration-729000\" (UID: \"3f7b59a0f3e7bb502743e8d8e38b7d9a\") " pod="kube-system/kube-apiserver-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: E0128 04:05:54.149999    2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-expiration-729000\" already exists" pod="kube-system/kube-scheduler-cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.551571    2556 kubelet_node_status.go:108] "Node was previously registered" node="cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.551670    2556 kubelet_node_status.go:73] "Successfully registered node" node="cert-expiration-729000"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.747578    2556 apiserver.go:52] "Watching apiserver"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.963777    2556 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 28 04:05:54 cert-expiration-729000 kubelet[2556]: I0128 04:05:54.979374    2556 reconciler.go:41] "Reconciler: start to sync state"
	Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: I0128 04:05:55.283585    2556 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: E0128 04:05:55.350447    2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-cert-expiration-729000\" already exists" pod="kube-system/kube-apiserver-cert-expiration-729000"
	Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: E0128 04:05:55.549473    2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-cert-expiration-729000\" already exists" pod="kube-system/kube-scheduler-cert-expiration-729000"
	Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: E0128 04:05:55.747387    2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"etcd-cert-expiration-729000\" already exists" pod="kube-system/etcd-cert-expiration-729000"
	Jan 28 04:05:55 cert-expiration-729000 kubelet[2556]: I0128 04:05:55.946415    2556 request.go:690] Waited for 1.08926386s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jan 28 04:05:56 cert-expiration-729000 kubelet[2556]: E0128 04:05:56.002997    2556 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-cert-expiration-729000\" already exists" pod="kube-system/kube-controller-manager-cert-expiration-729000"
	Jan 28 04:05:56 cert-expiration-729000 kubelet[2556]: I0128 04:05:56.547651    2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-expiration-729000" podStartSLOduration=3.5476181650000003 pod.CreationTimestamp="2023-01-28 04:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 04:05:56.210086978 +0000 UTC m=+2.565195159" watchObservedRunningTime="2023-01-28 04:05:56.547618165 +0000 UTC m=+2.902726329"
	Jan 28 04:05:56 cert-expiration-729000 kubelet[2556]: I0128 04:05:56.996747    2556 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-expiration-729000" podStartSLOduration=3.996717603 pod.CreationTimestamp="2023-01-28 04:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-01-28 04:05:56.548333387 +0000 UTC m=+2.903441558" watchObservedRunningTime="2023-01-28 04:05:56.996717603 +0000 UTC m=+3.351825783"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p cert-expiration-729000 -n cert-expiration-729000
helpers_test.go:261: (dbg) Run:  kubectl --context cert-expiration-729000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: kube-controller-manager-cert-expiration-729000 storage-provisioner
helpers_test.go:274: ======> post-mortem[TestCertExpiration]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context cert-expiration-729000 describe pod kube-controller-manager-cert-expiration-729000 storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context cert-expiration-729000 describe pod kube-controller-manager-cert-expiration-729000 storage-provisioner: exit status 1 (47.282937ms)

** stderr ** 
	Error from server (NotFound): pods "kube-controller-manager-cert-expiration-729000" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context cert-expiration-729000 describe pod kube-controller-manager-cert-expiration-729000 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "cert-expiration-729000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-729000

=== CONT  TestCertExpiration
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-729000: (5.274928332s)
--- FAIL: TestCertExpiration (232.10s)

TestErrorSpam/setup (39.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-259000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-259000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 --driver=hyperkit : (39.384446808s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0"
error_spam_test.go:110: minikube stdout:
* [nospam-259000] minikube v1.28.0 on Darwin 13.2
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the hyperkit driver based on user configuration
* Starting control plane node nospam-259000 in cluster nospam-259000
* Creating hyperkit VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-259000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
--- FAIL: TestErrorSpam/setup (39.38s)

TestImageBuild (86.89s)

=== RUN   TestImageBuild
image_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p image-051000 --driver=hyperkit 
E0127 19:40:30.972768    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
image_test.go:40: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p image-051000 --driver=hyperkit : exit status 90 (1m21.614348461s)

-- stdout --
	* [image-051000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node image-051000 in cluster image-051000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:43: failed to start minikube with args: "out/minikube-darwin-amd64 start -p image-051000 --driver=hyperkit " : exit status 90
helpers_test.go:175: Cleaning up "image-051000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p image-051000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p image-051000: (5.280036687s)
--- FAIL: TestImageBuild (86.89s)

TestNetworkPlugins/group/auto/Start (20.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : exit status 90 (20.814809076s)

-- stdout --
	* [auto-035000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting control plane node auto-035000 in cluster auto-035000
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0127 20:10:26.564793   10821 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:10:26.565058   10821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:10:26.565064   10821 out.go:309] Setting ErrFile to fd 2...
	I0127 20:10:26.565068   10821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:10:26.565184   10821 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 20:10:26.565721   10821 out.go:303] Setting JSON to false
	I0127 20:10:26.585736   10821 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4201,"bootTime":1674874825,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 20:10:26.585835   10821 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:10:26.608351   10821 out.go:177] * [auto-035000] minikube v1.28.0 on Darwin 13.2
	I0127 20:10:26.655865   10821 notify.go:220] Checking for updates...
	I0127 20:10:26.720839   10821 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:10:26.783943   10821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 20:10:26.825936   10821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:10:26.913070   10821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:10:26.975808   10821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 20:10:27.022860   10821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:10:27.044892   10821 config.go:180] Loaded profile config "NoKubernetes-182000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0127 20:10:27.045014   10821 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:10:27.110961   10821 out.go:177] * Using the hyperkit driver based on user configuration
	I0127 20:10:27.133838   10821 start.go:296] selected driver: hyperkit
	I0127 20:10:27.133866   10821 start.go:840] validating driver "hyperkit" against <nil>
	I0127 20:10:27.133881   10821 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:10:27.136543   10821 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:10:27.136655   10821 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0127 20:10:27.143333   10821 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0127 20:10:27.146588   10821 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:10:27.146608   10821 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0127 20:10:27.146647   10821 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 20:10:27.146801   10821 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:10:27.146830   10821 cni.go:84] Creating CNI manager for ""
	I0127 20:10:27.146847   10821 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:10:27.146852   10821 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0127 20:10:27.146865   10821 start_flags.go:319] config:
	{Name:auto-035000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:auto-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:10:27.146961   10821 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:10:27.188681   10821 out.go:177] * Starting control plane node auto-035000 in cluster auto-035000
	I0127 20:10:27.209884   10821 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:10:27.209947   10821 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 20:10:27.209968   10821 cache.go:57] Caching tarball of preloaded images
	I0127 20:10:27.210105   10821 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:10:27.210114   10821 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0127 20:10:27.210217   10821 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/auto-035000/config.json ...
	I0127 20:10:27.210248   10821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/auto-035000/config.json: {Name:mke545a8520bb64ac2264bd19e643dbcdbaa09cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 20:10:27.210482   10821 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:10:27.210509   10821 start.go:364] acquiring machines lock for auto-035000: {Name:mk69c04a34b14d26e3f74e414bcb566a33d5b215 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 20:10:27.210558   10821 start.go:368] acquired machines lock for "auto-035000" in 39.89µs
	I0127 20:10:27.210581   10821 start.go:93] Provisioning new machine with config: &{Name:auto-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:auto-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 20:10:27.210643   10821 start.go:125] createHost starting for "" (driver="hyperkit")
	I0127 20:10:27.235838   10821 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0127 20:10:27.236124   10821 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:10:27.236165   10821 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:10:27.243436   10821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53238
	I0127 20:10:27.243842   10821 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:10:27.244236   10821 main.go:141] libmachine: Using API Version  1
	I0127 20:10:27.244256   10821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:10:27.244464   10821 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:10:27.244578   10821 main.go:141] libmachine: (auto-035000) Calling .GetMachineName
	I0127 20:10:27.244671   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:27.244771   10821 start.go:159] libmachine.API.Create for "auto-035000" (driver="hyperkit")
	I0127 20:10:27.244794   10821 client.go:168] LocalClient.Create starting
	I0127 20:10:27.244827   10821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem
	I0127 20:10:27.244876   10821 main.go:141] libmachine: Decoding PEM data...
	I0127 20:10:27.244889   10821 main.go:141] libmachine: Parsing certificate...
	I0127 20:10:27.244944   10821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem
	I0127 20:10:27.244977   10821 main.go:141] libmachine: Decoding PEM data...
	I0127 20:10:27.244990   10821 main.go:141] libmachine: Parsing certificate...
	I0127 20:10:27.245003   10821 main.go:141] libmachine: Running pre-create checks...
	I0127 20:10:27.245016   10821 main.go:141] libmachine: (auto-035000) Calling .PreCreateCheck
	I0127 20:10:27.245091   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:27.245278   10821 main.go:141] libmachine: (auto-035000) Calling .GetConfigRaw
	I0127 20:10:27.245695   10821 main.go:141] libmachine: Creating machine...
	I0127 20:10:27.245704   10821 main.go:141] libmachine: (auto-035000) Calling .Create
	I0127 20:10:27.245778   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:27.245909   10821 main.go:141] libmachine: (auto-035000) DBG | I0127 20:10:27.245774   10835 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 20:10:27.245977   10821 main.go:141] libmachine: (auto-035000) Downloading /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15565-3235/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso...
	I0127 20:10:27.443861   10821 main.go:141] libmachine: (auto-035000) DBG | I0127 20:10:27.443773   10835 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/id_rsa...
	I0127 20:10:27.511367   10821 main.go:141] libmachine: (auto-035000) DBG | I0127 20:10:27.511278   10835 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/auto-035000.rawdisk...
	I0127 20:10:27.511379   10821 main.go:141] libmachine: (auto-035000) DBG | Writing magic tar header
	I0127 20:10:27.511387   10821 main.go:141] libmachine: (auto-035000) DBG | Writing SSH key tar header
	I0127 20:10:27.512066   10821 main.go:141] libmachine: (auto-035000) DBG | I0127 20:10:27.512009   10835 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000 ...
	I0127 20:10:27.657493   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:27.657508   10821 main.go:141] libmachine: (auto-035000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/hyperkit.pid
	I0127 20:10:27.657520   10821 main.go:141] libmachine: (auto-035000) DBG | Using UUID b21b38b4-9ec1-11ed-88e2-149d997fca88
	I0127 20:10:27.678190   10821 main.go:141] libmachine: (auto-035000) DBG | Generated MAC fa:b:55:1c:5b:fd
	I0127 20:10:27.678210   10821 main.go:141] libmachine: (auto-035000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=auto-035000
	I0127 20:10:27.678248   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b21b38b4-9ec1-11ed-88e2-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00024a840)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0127 20:10:27.678274   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b21b38b4-9ec1-11ed-88e2-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00024a840)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0127 20:10:27.678336   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b21b38b4-9ec1-11ed-88e2-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/auto-035000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/tty,log=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/bzimage,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=auto-035000"}
	I0127 20:10:27.678372   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b21b38b4-9ec1-11ed-88e2-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/auto-035000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/tty,log=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/bzimage,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=auto-035000"
	I0127 20:10:27.678385   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0127 20:10:27.679714   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 DEBUG: hyperkit: Pid is 10841
	I0127 20:10:27.680041   10821 main.go:141] libmachine: (auto-035000) DBG | Attempt 0
	I0127 20:10:27.680052   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:27.680152   10821 main.go:141] libmachine: (auto-035000) DBG | hyperkit pid from json: 10841
	I0127 20:10:27.681072   10821 main.go:141] libmachine: (auto-035000) DBG | Searching for fa:b:55:1c:5b:fd in /var/db/dhcpd_leases ...
	I0127 20:10:27.681126   10821 main.go:141] libmachine: (auto-035000) DBG | Found 29 entries in /var/db/dhcpd_leases!
	I0127 20:10:27.681161   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:4e:ed:63:62:fc:74 ID:1,4e:ed:63:62:fc:74 Lease:0x63d5f1ac}
	I0127 20:10:27.681175   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:92:e8:e3:c4:f2:c8 ID:1,92:e8:e3:c4:f2:c8 Lease:0x63d4a00f}
	I0127 20:10:27.681185   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:0:b5:81:6b:25 ID:1,42:0:b5:81:6b:25 Lease:0x63d5f153}
	I0127 20:10:27.681193   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:e6:e7:10:11:60:d5 ID:1,e6:e7:10:11:60:d5 Lease:0x63d5f11f}
	I0127 20:10:27.681210   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:9a:fd:c9:ea:0:f4 ID:1,9a:fd:c9:ea:0:f4 Lease:0x63d5f0fa}
	I0127 20:10:27.681220   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:2:53:13:2c:ff:14 ID:1,2:53:13:2c:ff:14 Lease:0x63d5f010}
	I0127 20:10:27.681228   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:c6:3e:49:a7:8f:45 ID:1,c6:3e:49:a7:8f:45 Lease:0x63d49e7a}
	I0127 20:10:27.681237   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6a:fe:f6:ef:e1:6e ID:1,6a:fe:f6:ef:e1:6e Lease:0x63d5efcb}
	I0127 20:10:27.681246   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:6a:2a:9:fd:ff:ea ID:1,6a:2a:9:fd:ff:ea Lease:0x63d5efaa}
	I0127 20:10:27.681253   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:8e:af:39:6e:2d:7e ID:1,8e:af:39:6e:2d:7e Lease:0x63d5ef9f}
	I0127 20:10:27.681270   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:26:9d:91:15:d7:c0 ID:1,26:9d:91:15:d7:c0 Lease:0x63d49e1e}
	I0127 20:10:27.681283   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:b2:e4:7a:80:49:eb ID:1,b2:e4:7a:80:49:eb Lease:0x63d5ef63}
	I0127 20:10:27.681292   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:82:e5:85:4a:11:dc ID:1,82:e5:85:4a:11:dc Lease:0x63d5ef18}
	I0127 20:10:27.681302   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:5e:9f:11:8c:b2:80 ID:1,5e:9f:11:8c:b2:80 Lease:0x63d5eea6}
	I0127 20:10:27.681310   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:6e:a5:9:d2:3e:da ID:1,6e:a5:9:d2:3e:da Lease:0x63d5ee59}
	I0127 20:10:27.681319   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:6a:76:72:7a:17:3c ID:1,6a:76:72:7a:17:3c Lease:0x63d49c4f}
	I0127 20:10:27.681333   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ba:21:ea:d1:67:55 ID:1,ba:21:ea:d1:67:55 Lease:0x63d49bc2}
	I0127 20:10:27.681346   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:e2:9a:64:74:42:a7 ID:1,e2:9a:64:74:42:a7 Lease:0x63d5ed8f}
	I0127 20:10:27.681360   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:c6:ea:3a:34:4:15 ID:1,c6:ea:3a:34:4:15 Lease:0x63d5ed5c}
	I0127 20:10:27.681371   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:de:3e:8e:16:8c:c ID:1,de:3e:8e:16:8c:c Lease:0x63d49a8a}
	I0127 20:10:27.681383   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:ea:c4:22:14:f6:79 ID:1,ea:c4:22:14:f6:79 Lease:0x63d49a75}
	I0127 20:10:27.681397   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:be:1f:dd:9:d4:b2 ID:1,be:1f:dd:9:d4:b2 Lease:0x63d49a4f}
	I0127 20:10:27.681409   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:9e:12:e:46:1e:dc ID:1,9e:12:e:46:1e:dc Lease:0x63d5eb82}
	I0127 20:10:27.681419   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:a:f3:2f:66:84:69 ID:1,a:f3:2f:66:84:69 Lease:0x63d5eb41}
	I0127 20:10:27.681445   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:8a:32:b3:dc:47:42 ID:1,8a:32:b3:dc:47:42 Lease:0x63d5eaca}
	I0127 20:10:27.681460   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:ca:3a:99:12:85:7a ID:1,ca:3a:99:12:85:7a Lease:0x63d5ea6e}
	I0127 20:10:27.681473   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:a6:93:12:50:62:df ID:1,a6:93:12:50:62:df Lease:0x63d5e969}
	I0127 20:10:27.681485   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:86:ae:74:13:81:48 ID:1,86:ae:74:13:81:48 Lease:0x63d497de}
	I0127 20:10:27.681496   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4e:8b:35:2e:2a:1c ID:1,4e:8b:35:2e:2a:1c Lease:0x63d5e860}
	I0127 20:10:27.686572   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0127 20:10:27.696380   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0127 20:10:27.697002   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0127 20:10:27.697013   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0127 20:10:27.697026   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0127 20:10:27.697039   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0127 20:10:28.075515   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0127 20:10:28.075530   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0127 20:10:28.179526   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0127 20:10:28.179551   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0127 20:10:28.179565   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0127 20:10:28.179577   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0127 20:10:28.180419   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0127 20:10:28.180430   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0127 20:10:29.682481   10821 main.go:141] libmachine: (auto-035000) DBG | Attempt 1
	I0127 20:10:29.682493   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:29.682596   10821 main.go:141] libmachine: (auto-035000) DBG | hyperkit pid from json: 10841
	I0127 20:10:29.683913   10821 main.go:141] libmachine: (auto-035000) DBG | Searching for fa:b:55:1c:5b:fd in /var/db/dhcpd_leases ...
	I0127 20:10:29.683987   10821 main.go:141] libmachine: (auto-035000) DBG | Found 29 entries in /var/db/dhcpd_leases!
	I0127 20:10:29.683997   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:4e:ed:63:62:fc:74 ID:1,4e:ed:63:62:fc:74 Lease:0x63d4a034}
	I0127 20:10:29.684007   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:92:e8:e3:c4:f2:c8 ID:1,92:e8:e3:c4:f2:c8 Lease:0x63d4a00f}
	I0127 20:10:29.684016   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:0:b5:81:6b:25 ID:1,42:0:b5:81:6b:25 Lease:0x63d5f153}
	I0127 20:10:29.684024   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:e6:e7:10:11:60:d5 ID:1,e6:e7:10:11:60:d5 Lease:0x63d5f11f}
	I0127 20:10:29.684033   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:9a:fd:c9:ea:0:f4 ID:1,9a:fd:c9:ea:0:f4 Lease:0x63d5f0fa}
	I0127 20:10:29.684041   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:2:53:13:2c:ff:14 ID:1,2:53:13:2c:ff:14 Lease:0x63d5f010}
	I0127 20:10:29.684047   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:c6:3e:49:a7:8f:45 ID:1,c6:3e:49:a7:8f:45 Lease:0x63d49e7a}
	I0127 20:10:29.684054   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6a:fe:f6:ef:e1:6e ID:1,6a:fe:f6:ef:e1:6e Lease:0x63d5efcb}
	I0127 20:10:29.684062   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:6a:2a:9:fd:ff:ea ID:1,6a:2a:9:fd:ff:ea Lease:0x63d5efaa}
	I0127 20:10:29.684081   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:8e:af:39:6e:2d:7e ID:1,8e:af:39:6e:2d:7e Lease:0x63d5ef9f}
	I0127 20:10:29.684095   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:26:9d:91:15:d7:c0 ID:1,26:9d:91:15:d7:c0 Lease:0x63d49e1e}
	I0127 20:10:29.684108   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:b2:e4:7a:80:49:eb ID:1,b2:e4:7a:80:49:eb Lease:0x63d5ef63}
	I0127 20:10:29.684123   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:82:e5:85:4a:11:dc ID:1,82:e5:85:4a:11:dc Lease:0x63d5ef18}
	I0127 20:10:29.684131   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:5e:9f:11:8c:b2:80 ID:1,5e:9f:11:8c:b2:80 Lease:0x63d5eea6}
	I0127 20:10:29.684141   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:6e:a5:9:d2:3e:da ID:1,6e:a5:9:d2:3e:da Lease:0x63d5ee59}
	I0127 20:10:29.684151   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:6a:76:72:7a:17:3c ID:1,6a:76:72:7a:17:3c Lease:0x63d49c4f}
	I0127 20:10:29.684159   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ba:21:ea:d1:67:55 ID:1,ba:21:ea:d1:67:55 Lease:0x63d49bc2}
	I0127 20:10:29.684168   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:e2:9a:64:74:42:a7 ID:1,e2:9a:64:74:42:a7 Lease:0x63d5ed8f}
	I0127 20:10:29.684175   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:c6:ea:3a:34:4:15 ID:1,c6:ea:3a:34:4:15 Lease:0x63d5ed5c}
	I0127 20:10:29.684183   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:de:3e:8e:16:8c:c ID:1,de:3e:8e:16:8c:c Lease:0x63d49a8a}
	I0127 20:10:29.684190   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:ea:c4:22:14:f6:79 ID:1,ea:c4:22:14:f6:79 Lease:0x63d49a75}
	I0127 20:10:29.684198   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:be:1f:dd:9:d4:b2 ID:1,be:1f:dd:9:d4:b2 Lease:0x63d49a4f}
	I0127 20:10:29.684204   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:9e:12:e:46:1e:dc ID:1,9e:12:e:46:1e:dc Lease:0x63d5eb82}
	I0127 20:10:29.684213   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:a:f3:2f:66:84:69 ID:1,a:f3:2f:66:84:69 Lease:0x63d5eb41}
	I0127 20:10:29.684220   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:8a:32:b3:dc:47:42 ID:1,8a:32:b3:dc:47:42 Lease:0x63d5eaca}
	I0127 20:10:29.684229   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:ca:3a:99:12:85:7a ID:1,ca:3a:99:12:85:7a Lease:0x63d5ea6e}
	I0127 20:10:29.684237   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:a6:93:12:50:62:df ID:1,a6:93:12:50:62:df Lease:0x63d5e969}
	I0127 20:10:29.684245   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:86:ae:74:13:81:48 ID:1,86:ae:74:13:81:48 Lease:0x63d497de}
	I0127 20:10:29.684254   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4e:8b:35:2e:2a:1c ID:1,4e:8b:35:2e:2a:1c Lease:0x63d5e860}
	I0127 20:10:31.685504   10821 main.go:141] libmachine: (auto-035000) DBG | Attempt 2
	I0127 20:10:31.685519   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:31.685589   10821 main.go:141] libmachine: (auto-035000) DBG | hyperkit pid from json: 10841
	I0127 20:10:31.686420   10821 main.go:141] libmachine: (auto-035000) DBG | Searching for fa:b:55:1c:5b:fd in /var/db/dhcpd_leases ...
	I0127 20:10:31.686477   10821 main.go:141] libmachine: (auto-035000) DBG | Found 29 entries in /var/db/dhcpd_leases!
	I0127 20:10:31.686489   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:4e:ed:63:62:fc:74 ID:1,4e:ed:63:62:fc:74 Lease:0x63d4a034}
	I0127 20:10:31.686498   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:92:e8:e3:c4:f2:c8 ID:1,92:e8:e3:c4:f2:c8 Lease:0x63d4a00f}
	I0127 20:10:31.686506   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:0:b5:81:6b:25 ID:1,42:0:b5:81:6b:25 Lease:0x63d5f153}
	I0127 20:10:31.686532   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:e6:e7:10:11:60:d5 ID:1,e6:e7:10:11:60:d5 Lease:0x63d5f11f}
	I0127 20:10:31.686545   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:9a:fd:c9:ea:0:f4 ID:1,9a:fd:c9:ea:0:f4 Lease:0x63d5f0fa}
	I0127 20:10:31.686553   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:2:53:13:2c:ff:14 ID:1,2:53:13:2c:ff:14 Lease:0x63d5f010}
	I0127 20:10:31.686562   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:c6:3e:49:a7:8f:45 ID:1,c6:3e:49:a7:8f:45 Lease:0x63d49e7a}
	I0127 20:10:31.686570   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6a:fe:f6:ef:e1:6e ID:1,6a:fe:f6:ef:e1:6e Lease:0x63d5efcb}
	I0127 20:10:31.686579   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:6a:2a:9:fd:ff:ea ID:1,6a:2a:9:fd:ff:ea Lease:0x63d5efaa}
	I0127 20:10:31.686593   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:8e:af:39:6e:2d:7e ID:1,8e:af:39:6e:2d:7e Lease:0x63d5ef9f}
	I0127 20:10:31.686606   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:26:9d:91:15:d7:c0 ID:1,26:9d:91:15:d7:c0 Lease:0x63d49e1e}
	I0127 20:10:31.686614   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:b2:e4:7a:80:49:eb ID:1,b2:e4:7a:80:49:eb Lease:0x63d5ef63}
	I0127 20:10:31.686623   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:82:e5:85:4a:11:dc ID:1,82:e5:85:4a:11:dc Lease:0x63d5ef18}
	I0127 20:10:31.686642   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:5e:9f:11:8c:b2:80 ID:1,5e:9f:11:8c:b2:80 Lease:0x63d5eea6}
	I0127 20:10:31.686652   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:6e:a5:9:d2:3e:da ID:1,6e:a5:9:d2:3e:da Lease:0x63d5ee59}
	I0127 20:10:31.686660   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:6a:76:72:7a:17:3c ID:1,6a:76:72:7a:17:3c Lease:0x63d49c4f}
	I0127 20:10:31.686668   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ba:21:ea:d1:67:55 ID:1,ba:21:ea:d1:67:55 Lease:0x63d49bc2}
	I0127 20:10:31.686679   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:e2:9a:64:74:42:a7 ID:1,e2:9a:64:74:42:a7 Lease:0x63d5ed8f}
	I0127 20:10:31.686688   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:c6:ea:3a:34:4:15 ID:1,c6:ea:3a:34:4:15 Lease:0x63d5ed5c}
	I0127 20:10:31.686698   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:de:3e:8e:16:8c:c ID:1,de:3e:8e:16:8c:c Lease:0x63d49a8a}
	I0127 20:10:31.686708   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:ea:c4:22:14:f6:79 ID:1,ea:c4:22:14:f6:79 Lease:0x63d49a75}
	I0127 20:10:31.686716   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:be:1f:dd:9:d4:b2 ID:1,be:1f:dd:9:d4:b2 Lease:0x63d49a4f}
	I0127 20:10:31.686724   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:9e:12:e:46:1e:dc ID:1,9e:12:e:46:1e:dc Lease:0x63d5eb82}
	I0127 20:10:31.686733   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:a:f3:2f:66:84:69 ID:1,a:f3:2f:66:84:69 Lease:0x63d5eb41}
	I0127 20:10:31.686741   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:8a:32:b3:dc:47:42 ID:1,8a:32:b3:dc:47:42 Lease:0x63d5eaca}
	I0127 20:10:31.686749   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:ca:3a:99:12:85:7a ID:1,ca:3a:99:12:85:7a Lease:0x63d5ea6e}
	I0127 20:10:31.686757   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:a6:93:12:50:62:df ID:1,a6:93:12:50:62:df Lease:0x63d5e969}
	I0127 20:10:31.686764   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:86:ae:74:13:81:48 ID:1,86:ae:74:13:81:48 Lease:0x63d497de}
	I0127 20:10:31.686772   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4e:8b:35:2e:2a:1c ID:1,4e:8b:35:2e:2a:1c Lease:0x63d5e860}
	I0127 20:10:32.632370   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0127 20:10:32.632388   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0127 20:10:32.632395   10821 main.go:141] libmachine: (auto-035000) DBG | 2023/01/27 20:10:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0127 20:10:33.687454   10821 main.go:141] libmachine: (auto-035000) DBG | Attempt 3
	I0127 20:10:33.687468   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:33.687604   10821 main.go:141] libmachine: (auto-035000) DBG | hyperkit pid from json: 10841
	I0127 20:10:33.688335   10821 main.go:141] libmachine: (auto-035000) DBG | Searching for fa:b:55:1c:5b:fd in /var/db/dhcpd_leases ...
	I0127 20:10:33.688381   10821 main.go:141] libmachine: (auto-035000) DBG | Found 29 entries in /var/db/dhcpd_leases!
	I0127 20:10:33.688413   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:4e:ed:63:62:fc:74 ID:1,4e:ed:63:62:fc:74 Lease:0x63d4a034}
	I0127 20:10:33.688429   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:92:e8:e3:c4:f2:c8 ID:1,92:e8:e3:c4:f2:c8 Lease:0x63d4a00f}
	I0127 20:10:33.688456   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:42:0:b5:81:6b:25 ID:1,42:0:b5:81:6b:25 Lease:0x63d5f153}
	I0127 20:10:33.688468   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:e6:e7:10:11:60:d5 ID:1,e6:e7:10:11:60:d5 Lease:0x63d5f11f}
	I0127 20:10:33.688480   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:9a:fd:c9:ea:0:f4 ID:1,9a:fd:c9:ea:0:f4 Lease:0x63d5f0fa}
	I0127 20:10:33.688493   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:2:53:13:2c:ff:14 ID:1,2:53:13:2c:ff:14 Lease:0x63d5f010}
	I0127 20:10:33.688509   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:c6:3e:49:a7:8f:45 ID:1,c6:3e:49:a7:8f:45 Lease:0x63d49e7a}
	I0127 20:10:33.688520   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:6a:fe:f6:ef:e1:6e ID:1,6a:fe:f6:ef:e1:6e Lease:0x63d5efcb}
	I0127 20:10:33.688535   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:6a:2a:9:fd:ff:ea ID:1,6a:2a:9:fd:ff:ea Lease:0x63d5efaa}
	I0127 20:10:33.688545   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:8e:af:39:6e:2d:7e ID:1,8e:af:39:6e:2d:7e Lease:0x63d5ef9f}
	I0127 20:10:33.688552   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:26:9d:91:15:d7:c0 ID:1,26:9d:91:15:d7:c0 Lease:0x63d49e1e}
	I0127 20:10:33.688561   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:b2:e4:7a:80:49:eb ID:1,b2:e4:7a:80:49:eb Lease:0x63d5ef63}
	I0127 20:10:33.688575   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:82:e5:85:4a:11:dc ID:1,82:e5:85:4a:11:dc Lease:0x63d5ef18}
	I0127 20:10:33.688584   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:5e:9f:11:8c:b2:80 ID:1,5e:9f:11:8c:b2:80 Lease:0x63d5eea6}
	I0127 20:10:33.688595   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:6e:a5:9:d2:3e:da ID:1,6e:a5:9:d2:3e:da Lease:0x63d5ee59}
	I0127 20:10:33.688603   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:6a:76:72:7a:17:3c ID:1,6a:76:72:7a:17:3c Lease:0x63d49c4f}
	I0127 20:10:33.688611   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:ba:21:ea:d1:67:55 ID:1,ba:21:ea:d1:67:55 Lease:0x63d49bc2}
	I0127 20:10:33.688619   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:e2:9a:64:74:42:a7 ID:1,e2:9a:64:74:42:a7 Lease:0x63d5ed8f}
	I0127 20:10:33.688627   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:c6:ea:3a:34:4:15 ID:1,c6:ea:3a:34:4:15 Lease:0x63d5ed5c}
	I0127 20:10:33.688642   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:de:3e:8e:16:8c:c ID:1,de:3e:8e:16:8c:c Lease:0x63d49a8a}
	I0127 20:10:33.688651   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:ea:c4:22:14:f6:79 ID:1,ea:c4:22:14:f6:79 Lease:0x63d49a75}
	I0127 20:10:33.688659   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:be:1f:dd:9:d4:b2 ID:1,be:1f:dd:9:d4:b2 Lease:0x63d49a4f}
	I0127 20:10:33.688666   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:9e:12:e:46:1e:dc ID:1,9e:12:e:46:1e:dc Lease:0x63d5eb82}
	I0127 20:10:33.688674   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:a:f3:2f:66:84:69 ID:1,a:f3:2f:66:84:69 Lease:0x63d5eb41}
	I0127 20:10:33.688689   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:8a:32:b3:dc:47:42 ID:1,8a:32:b3:dc:47:42 Lease:0x63d5eaca}
	I0127 20:10:33.688704   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:ca:3a:99:12:85:7a ID:1,ca:3a:99:12:85:7a Lease:0x63d5ea6e}
	I0127 20:10:33.688712   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:a6:93:12:50:62:df ID:1,a6:93:12:50:62:df Lease:0x63d5e969}
	I0127 20:10:33.688720   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:86:ae:74:13:81:48 ID:1,86:ae:74:13:81:48 Lease:0x63d497de}
	I0127 20:10:33.688730   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4e:8b:35:2e:2a:1c ID:1,4e:8b:35:2e:2a:1c Lease:0x63d5e860}
	I0127 20:10:35.688954   10821 main.go:141] libmachine: (auto-035000) DBG | Attempt 4
	I0127 20:10:35.688977   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:35.689018   10821 main.go:141] libmachine: (auto-035000) DBG | hyperkit pid from json: 10841
	I0127 20:10:35.689997   10821 main.go:141] libmachine: (auto-035000) DBG | Searching for fa:b:55:1c:5b:fd in /var/db/dhcpd_leases ...
	I0127 20:10:35.690065   10821 main.go:141] libmachine: (auto-035000) DBG | Found 30 entries in /var/db/dhcpd_leases!
	I0127 20:10:35.690082   10821 main.go:141] libmachine: (auto-035000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.31 HWAddress:fa:b:55:1c:5b:fd ID:1,fa:b:55:1c:5b:fd Lease:0x63d5f1bb}
	I0127 20:10:35.690121   10821 main.go:141] libmachine: (auto-035000) DBG | Found match: fa:b:55:1c:5b:fd
	I0127 20:10:35.690131   10821 main.go:141] libmachine: (auto-035000) DBG | IP: 192.168.64.31
	I0127 20:10:35.690145   10821 main.go:141] libmachine: (auto-035000) Calling .GetConfigRaw
	I0127 20:10:35.690702   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:35.690817   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:35.690905   10821 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0127 20:10:35.690914   10821 main.go:141] libmachine: (auto-035000) Calling .GetState
	I0127 20:10:35.691000   10821 main.go:141] libmachine: (auto-035000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:10:35.691063   10821 main.go:141] libmachine: (auto-035000) DBG | hyperkit pid from json: 10841
	I0127 20:10:35.691764   10821 main.go:141] libmachine: Detecting operating system of created instance...
	I0127 20:10:35.691772   10821 main.go:141] libmachine: Waiting for SSH to be available...
	I0127 20:10:35.691777   10821 main.go:141] libmachine: Getting to WaitForSSH function...
	I0127 20:10:35.691782   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:35.691856   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:35.691947   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:35.692028   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:35.692109   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:35.692212   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:35.692412   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:35.692420   10821 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0127 20:10:36.775225   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:10:36.775239   10821 main.go:141] libmachine: Detecting the provisioner...
	I0127 20:10:36.775245   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:36.775382   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:36.775487   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:36.775586   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:36.775678   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:36.775812   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:36.775937   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:36.775945   10821 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0127 20:10:36.855214   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g4751c28-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0127 20:10:36.855276   10821 main.go:141] libmachine: found compatible host: buildroot
	I0127 20:10:36.855283   10821 main.go:141] libmachine: Provisioning with buildroot...
	I0127 20:10:36.855292   10821 main.go:141] libmachine: (auto-035000) Calling .GetMachineName
	I0127 20:10:36.855427   10821 buildroot.go:166] provisioning hostname "auto-035000"
	I0127 20:10:36.855438   10821 main.go:141] libmachine: (auto-035000) Calling .GetMachineName
	I0127 20:10:36.855522   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:36.855606   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:36.855717   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:36.855790   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:36.855900   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:36.856029   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:36.856152   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:36.856161   10821 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-035000 && echo "auto-035000" | sudo tee /etc/hostname
	I0127 20:10:36.944051   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-035000
	
	I0127 20:10:36.944068   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:36.944198   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:36.944297   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:36.944387   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:36.944463   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:36.944596   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:36.944725   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:36.944737   10821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-035000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-035000/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-035000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:10:37.028181   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:10:37.028197   10821 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3235/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3235/.minikube}
	I0127 20:10:37.028206   10821 buildroot.go:174] setting up certificates
	I0127 20:10:37.028215   10821 provision.go:83] configureAuth start
	I0127 20:10:37.028224   10821 main.go:141] libmachine: (auto-035000) Calling .GetMachineName
	I0127 20:10:37.028360   10821 main.go:141] libmachine: (auto-035000) Calling .GetIP
	I0127 20:10:37.028443   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.028529   10821 provision.go:138] copyHostCerts
	I0127 20:10:37.028618   10821 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem, removing ...
	I0127 20:10:37.028627   10821 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem
	I0127 20:10:37.028787   10821 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem (1082 bytes)
	I0127 20:10:37.028998   10821 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem, removing ...
	I0127 20:10:37.029005   10821 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem
	I0127 20:10:37.029102   10821 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem (1123 bytes)
	I0127 20:10:37.029247   10821 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem, removing ...
	I0127 20:10:37.029253   10821 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem
	I0127 20:10:37.029405   10821 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem (1675 bytes)
	I0127 20:10:37.029520   10821 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem org=jenkins.auto-035000 san=[192.168.64.31 192.168.64.31 localhost 127.0.0.1 minikube auto-035000]
	I0127 20:10:37.124345   10821 provision.go:172] copyRemoteCerts
	I0127 20:10:37.124393   10821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:10:37.124409   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.124549   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:37.124649   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.124749   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:37.124851   10821 sshutil.go:53] new ssh client: &{IP:192.168.64.31 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/id_rsa Username:docker}
	I0127 20:10:37.172621   10821 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 20:10:37.188290   10821 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0127 20:10:37.204025   10821 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 20:10:37.219323   10821 provision.go:86] duration metric: configureAuth took 191.102039ms
	I0127 20:10:37.219333   10821 buildroot.go:189] setting minikube options for container-runtime
	I0127 20:10:37.219481   10821 config.go:180] Loaded profile config "auto-035000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:10:37.219496   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:37.219626   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.219702   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:37.219787   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.219858   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.219928   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:37.220026   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:37.220128   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:37.220136   10821 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:10:37.303095   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 20:10:37.303108   10821 buildroot.go:70] root file system type: tmpfs
	I0127 20:10:37.303238   10821 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:10:37.303257   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.303408   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:37.303502   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.303595   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.303710   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:37.303835   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:37.303954   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:37.304004   10821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:10:37.394212   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:10:37.394235   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.394377   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:37.394480   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.394558   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.394665   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:37.394799   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:37.394910   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:37.394922   10821 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:10:37.876305   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 20:10:37.876329   10821 main.go:141] libmachine: Checking connection to Docker...
	I0127 20:10:37.876338   10821 main.go:141] libmachine: (auto-035000) Calling .GetURL
	I0127 20:10:37.876469   10821 main.go:141] libmachine: Docker is up and running!
	I0127 20:10:37.876477   10821 main.go:141] libmachine: Reticulating splines...
	I0127 20:10:37.876481   10821 client.go:171] LocalClient.Create took 10.631931984s
	I0127 20:10:37.876494   10821 start.go:167] duration metric: libmachine.API.Create for "auto-035000" took 10.63196978s
	I0127 20:10:37.876502   10821 start.go:300] post-start starting for "auto-035000" (driver="hyperkit")
	I0127 20:10:37.876506   10821 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:10:37.876522   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:37.876679   10821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:10:37.876694   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.876778   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:37.876865   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.876949   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:37.877030   10821 sshutil.go:53] new ssh client: &{IP:192.168.64.31 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/id_rsa Username:docker}
	I0127 20:10:37.924465   10821 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:10:37.927187   10821 info.go:137] Remote host: Buildroot 2021.02.12
	I0127 20:10:37.927200   10821 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/addons for local assets ...
	I0127 20:10:37.927287   10821 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/files for local assets ...
	I0127 20:10:37.927451   10821 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem -> 44422.pem in /etc/ssl/certs
	I0127 20:10:37.927634   10821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:10:37.934080   10821 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /etc/ssl/certs/44422.pem (1708 bytes)
	I0127 20:10:37.950170   10821 start.go:303] post-start completed in 73.662895ms
	I0127 20:10:37.950197   10821 main.go:141] libmachine: (auto-035000) Calling .GetConfigRaw
	I0127 20:10:37.950759   10821 main.go:141] libmachine: (auto-035000) Calling .GetIP
	I0127 20:10:37.950911   10821 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/auto-035000/config.json ...
	I0127 20:10:37.951193   10821 start.go:128] duration metric: createHost completed in 10.740796175s
	I0127 20:10:37.951213   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:37.951292   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:37.951373   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.951448   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:37.951527   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:37.951619   10821 main.go:141] libmachine: Using SSH client type: native
	I0127 20:10:37.951716   10821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.31 22 <nil> <nil>}
	I0127 20:10:37.951725   10821 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 20:10:38.030475   10821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674879037.992390733
	
	I0127 20:10:38.030492   10821 fix.go:207] guest clock: 1674879037.992390733
	I0127 20:10:38.030499   10821 fix.go:220] Guest: 2023-01-27 20:10:37.992390733 -0800 PST Remote: 2023-01-27 20:10:37.951203 -0800 PST m=+11.436873362 (delta=41.187733ms)
	I0127 20:10:38.030521   10821 fix.go:191] guest clock delta is within tolerance: 41.187733ms
	I0127 20:10:38.030526   10821 start.go:83] releasing machines lock for "auto-035000", held for 10.820215464s
	I0127 20:10:38.030544   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:38.030700   10821 main.go:141] libmachine: (auto-035000) Calling .GetIP
	I0127 20:10:38.030810   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:38.031148   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:38.031253   10821 main.go:141] libmachine: (auto-035000) Calling .DriverName
	I0127 20:10:38.031332   10821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 20:10:38.031357   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:38.031443   10821 ssh_runner.go:195] Run: cat /version.json
	I0127 20:10:38.031459   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHHostname
	I0127 20:10:38.031478   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:38.031607   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHPort
	I0127 20:10:38.031609   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:38.031724   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHKeyPath
	I0127 20:10:38.031738   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:38.031856   10821 main.go:141] libmachine: (auto-035000) Calling .GetSSHUsername
	I0127 20:10:38.031868   10821 sshutil.go:53] new ssh client: &{IP:192.168.64.31 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/id_rsa Username:docker}
	I0127 20:10:38.031951   10821 sshutil.go:53] new ssh client: &{IP:192.168.64.31 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/auto-035000/id_rsa Username:docker}
	W0127 20:10:38.111825   10821 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0127 20:10:38.111896   10821 ssh_runner.go:195] Run: systemctl --version
	I0127 20:10:38.115598   10821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 20:10:38.118998   10821 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 20:10:38.119063   10821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 20:10:38.124890   10821 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0127 20:10:38.135822   10821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 20:10:38.146157   10821 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 20:10:38.146171   10821 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:10:38.146262   10821 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:10:38.163727   10821 docker.go:630] Got preloaded images: 
	I0127 20:10:38.163740   10821 docker.go:636] registry.k8s.io/kube-apiserver:v1.26.1 wasn't preloaded
	I0127 20:10:38.163790   10821 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 20:10:38.170075   10821 ssh_runner.go:195] Run: which lz4
	I0127 20:10:38.172671   10821 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 20:10:38.175277   10821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 20:10:38.175300   10821 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (416334111 bytes)
	I0127 20:10:39.359249   10821 docker.go:594] Took 1.186662 seconds to copy over tarball
	I0127 20:10:39.359314   10821 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 20:10:43.515602   10821 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.156368776s)
	I0127 20:10:43.515622   10821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 20:10:43.542556   10821 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 20:10:43.549026   10821 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
	I0127 20:10:43.560486   10821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:10:43.650536   10821 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:10:44.971428   10821 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.320897878s)
	I0127 20:10:44.971460   10821 start.go:472] detecting cgroup driver to use...
	I0127 20:10:44.971568   10821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:10:44.983551   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0127 20:10:44.990508   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:10:44.997846   10821 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:10:44.997904   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:10:45.005135   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:10:45.012600   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:10:45.020003   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:10:45.028220   10821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:10:45.035701   10821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:10:45.043057   10821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:10:45.049649   10821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:10:45.056346   10821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:10:45.145699   10821 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:10:45.156873   10821 start.go:472] detecting cgroup driver to use...
	I0127 20:10:45.156944   10821 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:10:45.170725   10821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:10:45.180954   10821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 20:10:45.198126   10821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:10:45.207030   10821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:10:45.216057   10821 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 20:10:45.243997   10821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:10:45.253440   10821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:10:45.265659   10821 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:10:45.362697   10821 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:10:45.458978   10821 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:10:45.459001   10821 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:10:45.470469   10821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:10:45.562029   10821 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:10:46.783339   10821 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.22131753s)
	I0127 20:10:46.783407   10821 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 20:10:46.868275   10821 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 20:10:46.966622   10821 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 20:10:47.064053   10821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:10:47.152069   10821 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 20:10:47.193466   10821 out.go:177] 
	W0127 20:10:47.213405   10821 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W0127 20:10:47.213447   10821 out.go:239] * 
	W0127 20:10:47.214827   10821 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:10:47.298195   10821 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/auto/Start (20.83s)
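
	The failure above ends at `RUNTIME_ENABLE: sudo systemctl restart cri-docker.socket: Process exited with status 1`, and the log only says to check `journalctl -xe`. A hypothetical follow-up for triage (profile name `auto-035000` taken from this log; run while the VM still exists, i.e. before `minikube delete`) would be to pull the cri-docker unit state and recent journal entries out of the guest:

	```shell
	# Sketch only: collect cri-docker diagnostics from inside the minikube VM.
	# Requires the hyperkit VM for profile auto-035000 to still be running.
	minikube ssh -p auto-035000 -- \
	  'systemctl status cri-docker.socket cri-docker.service --no-pager;
	   sudo journalctl -u cri-docker.socket -u cri-docker.service --no-pager -n 50'
	```

	Given the version-skew warning earlier in this log (ISO built for v1.29.0 driven by a v1.28.0 binary), a mismatch between the shipped cri-dockerd unit files and what the older binary writes under `/etc/systemd/system/cri-docker.service.d/` is a plausible first thing to look for in that journal output.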

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (76.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:19:43.054488    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.1: exit status 90 (1m16.631372764s)

                                                
                                                
-- stdout --
	* [no-preload-272000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting control plane node no-preload-272000 in cluster no-preload-272000
	* Restarting existing hyperkit VM for "no-preload-272000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 20:19:40.651878   13574 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:19:40.652119   13574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:19:40.652124   13574 out.go:309] Setting ErrFile to fd 2...
	I0127 20:19:40.652129   13574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:19:40.652246   13574 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 20:19:40.652723   13574 out.go:303] Setting JSON to false
	I0127 20:19:40.671221   13574 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4755,"bootTime":1674874825,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 20:19:40.671316   13574 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:19:40.693439   13574 out.go:177] * [no-preload-272000] minikube v1.28.0 on Darwin 13.2
	I0127 20:19:40.736073   13574 notify.go:220] Checking for updates...
	I0127 20:19:40.758129   13574 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:19:40.779953   13574 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 20:19:40.801104   13574 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:19:40.822199   13574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:19:40.844148   13574 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 20:19:40.866271   13574 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:19:40.888662   13574 config.go:180] Loaded profile config "no-preload-272000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:19:40.889380   13574 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:40.889466   13574 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:19:40.897201   13574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56210
	I0127 20:19:40.897560   13574 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:19:40.897984   13574 main.go:141] libmachine: Using API Version  1
	I0127 20:19:40.897994   13574 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:19:40.898200   13574 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:19:40.898305   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:40.898430   13574 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:19:40.898693   13574 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:40.898714   13574 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:19:40.905322   13574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56212
	I0127 20:19:40.905670   13574 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:19:40.905983   13574 main.go:141] libmachine: Using API Version  1
	I0127 20:19:40.905996   13574 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:19:40.906230   13574 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:19:40.906354   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:40.933889   13574 out.go:177] * Using the hyperkit driver based on existing profile
	I0127 20:19:40.955130   13574 start.go:296] selected driver: hyperkit
	I0127 20:19:40.955159   13574 start.go:840] validating driver "hyperkit" against &{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.26.1 ClusterName:no-preload-272000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.41 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:19:40.955349   13574 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:19:40.959163   13574 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:40.959320   13574 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0127 20:19:40.966003   13574 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0127 20:19:40.969310   13574 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:40.969328   13574 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0127 20:19:40.969412   13574 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:19:40.969434   13574 cni.go:84] Creating CNI manager for ""
	I0127 20:19:40.969447   13574 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 20:19:40.969456   13574 start_flags.go:319] config:
	{Name:no-preload-272000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:no-preload-272000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.41 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:19:40.969554   13574 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.012780   13574 out.go:177] * Starting control plane node no-preload-272000 in cluster no-preload-272000
	I0127 20:19:41.034250   13574 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:19:41.034521   13574 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/config.json ...
	I0127 20:19:41.034587   13574 cache.go:107] acquiring lock: {Name:mk174b6d1530e6e18362835dc9b8686f577480f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034612   13574 cache.go:107] acquiring lock: {Name:mk6979b08b94c64df32bf734a70e87dafa8904f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034649   13574 cache.go:107] acquiring lock: {Name:mk6b172fb982cddade49a5386236e53cc3aa7acf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034736   13574 cache.go:107] acquiring lock: {Name:mkc41747601bbe405bf9dc83de63cc74e8734cd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034784   13574 cache.go:107] acquiring lock: {Name:mk7c6de2f6ea5386c3aa01275f204777bbe06839 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034751   13574 cache.go:107] acquiring lock: {Name:mkb44f3fd0307b7ab3357b477940330984f129b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034866   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0127 20:19:41.034860   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0127 20:19:41.034900   13574 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 275.505µs
	I0127 20:19:41.034844   13574 cache.go:107] acquiring lock: {Name:mk8d690d1d203d5750f44824f52e8888c1423716 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034915   13574 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 335.08µs
	I0127 20:19:41.034950   13574 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0127 20:19:41.034957   13574 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0127 20:19:41.034950   13574 cache.go:107] acquiring lock: {Name:mk8cfa92d19880509b7bc8eac68bb9e9cd194010 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:41.034991   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 exists
	I0127 20:19:41.035005   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 exists
	I0127 20:19:41.035011   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 exists
	I0127 20:19:41.035043   13574 cache.go:96] cache image "registry.k8s.io/etcd:3.5.6-0" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0" took 265.327µs
	I0127 20:19:41.035042   13574 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.26.1" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1" took 312.217µs
	I0127 20:19:41.035065   13574 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.6-0 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 succeeded
	I0127 20:19:41.035070   13574 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.26.1 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 succeeded
	I0127 20:19:41.035057   13574 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.26.1" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1" took 469.987µs
	I0127 20:19:41.035095   13574 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.26.1 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 succeeded
	I0127 20:19:41.035089   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 exists
	I0127 20:19:41.035124   13574 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.26.1" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1" took 407.364µs
	I0127 20:19:41.035146   13574 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.26.1 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 succeeded
	I0127 20:19:41.035144   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 exists
	I0127 20:19:41.035170   13574 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.26.1" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1" took 387.016µs
	I0127 20:19:41.035185   13574 cache.go:115] /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0127 20:19:41.035188   13574 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.26.1 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 succeeded
	I0127 20:19:41.035207   13574 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 355.018µs
	I0127 20:19:41.035218   13574 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0127 20:19:41.035231   13574 cache.go:87] Successfully saved all images to host disk.
	I0127 20:19:41.035508   13574 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:19:41.035565   13574 start.go:364] acquiring machines lock for no-preload-272000: {Name:mk69c04a34b14d26e3f74e414bcb566a33d5b215 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 20:19:41.035664   13574 start.go:368] acquired machines lock for "no-preload-272000" in 81.944µs
	I0127 20:19:41.035693   13574 start.go:96] Skipping create...Using existing machine configuration
	I0127 20:19:41.035707   13574 fix.go:55] fixHost starting: 
	I0127 20:19:41.036147   13574 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:41.036183   13574 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:19:41.043759   13574 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56214
	I0127 20:19:41.044120   13574 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:19:41.044513   13574 main.go:141] libmachine: Using API Version  1
	I0127 20:19:41.044530   13574 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:19:41.044757   13574 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:19:41.044865   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:41.044961   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetState
	I0127 20:19:41.045052   13574 main.go:141] libmachine: (no-preload-272000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:19:41.045121   13574 main.go:141] libmachine: (no-preload-272000) DBG | hyperkit pid from json: 13252
	I0127 20:19:41.045975   13574 main.go:141] libmachine: (no-preload-272000) DBG | hyperkit pid 13252 missing from process table
	I0127 20:19:41.046008   13574 fix.go:103] recreateIfNeeded on no-preload-272000: state=Stopped err=<nil>
	I0127 20:19:41.046024   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	W0127 20:19:41.046113   13574 fix.go:129] unexpected machine state, will restart: <nil>
	I0127 20:19:41.066936   13574 out.go:177] * Restarting existing hyperkit VM for "no-preload-272000" ...
	I0127 20:19:41.087907   13574 main.go:141] libmachine: (no-preload-272000) Calling .Start
	I0127 20:19:41.088126   13574 main.go:141] libmachine: (no-preload-272000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:19:41.088151   13574 main.go:141] libmachine: (no-preload-272000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/hyperkit.pid
	I0127 20:19:41.088993   13574 main.go:141] libmachine: (no-preload-272000) DBG | hyperkit pid 13252 missing from process table
	I0127 20:19:41.089002   13574 main.go:141] libmachine: (no-preload-272000) DBG | pid 13252 is in state "Stopped"
	I0127 20:19:41.089017   13574 main.go:141] libmachine: (no-preload-272000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/hyperkit.pid...
	I0127 20:19:41.089155   13574 main.go:141] libmachine: (no-preload-272000) DBG | Using UUID b65cf4ac-9ec2-11ed-a719-149d997fca88
	I0127 20:19:41.118221   13574 main.go:141] libmachine: (no-preload-272000) DBG | Generated MAC 96:1e:f3:16:73:71
	I0127 20:19:41.118248   13574 main.go:141] libmachine: (no-preload-272000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=no-preload-272000
	I0127 20:19:41.118380   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b65cf4ac-9ec2-11ed-a719-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c7b90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0127 20:19:41.118437   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b65cf4ac-9ec2-11ed-a719-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c7b90)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0127 20:19:41.118494   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b65cf4ac-9ec2-11ed-a719-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/no-preload-272000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/tty,log=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/bzimage,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=no-preload-272000"}
	I0127 20:19:41.118528   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b65cf4ac-9ec2-11ed-a719-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/no-preload-272000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/tty,log=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/bzimage,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=no-preload-272000"
	I0127 20:19:41.118582   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0127 20:19:41.119669   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 DEBUG: hyperkit: Pid is 13586
	I0127 20:19:41.120015   13574 main.go:141] libmachine: (no-preload-272000) DBG | Attempt 0
	I0127 20:19:41.120023   13574 main.go:141] libmachine: (no-preload-272000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:19:41.120120   13574 main.go:141] libmachine: (no-preload-272000) DBG | hyperkit pid from json: 13586
	I0127 20:19:41.121919   13574 main.go:141] libmachine: (no-preload-272000) DBG | Searching for 96:1e:f3:16:73:71 in /var/db/dhcpd_leases ...
	I0127 20:19:41.122007   13574 main.go:141] libmachine: (no-preload-272000) DBG | Found 40 entries in /var/db/dhcpd_leases!
	I0127 20:19:41.122023   13574 main.go:141] libmachine: (no-preload-272000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:96:1e:f3:16:73:71 ID:1,96:1e:f3:16:73:71 Lease:0x63d5f370}
	I0127 20:19:41.122032   13574 main.go:141] libmachine: (no-preload-272000) DBG | Found match: 96:1e:f3:16:73:71
	I0127 20:19:41.122043   13574 main.go:141] libmachine: (no-preload-272000) DBG | IP: 192.168.64.41
	I0127 20:19:41.122116   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetConfigRaw
	I0127 20:19:41.122673   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetIP
	I0127 20:19:41.122836   13574 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/config.json ...
	I0127 20:19:41.123162   13574 machine.go:88] provisioning docker machine ...
	I0127 20:19:41.123172   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:41.123301   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetMachineName
	I0127 20:19:41.123414   13574 buildroot.go:166] provisioning hostname "no-preload-272000"
	I0127 20:19:41.123424   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetMachineName
	I0127 20:19:41.123555   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:41.123649   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:41.123755   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:41.123855   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:41.123946   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:41.124099   13574 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:41.124307   13574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.41 22 <nil> <nil>}
	I0127 20:19:41.124320   13574 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-272000 && echo "no-preload-272000" | sudo tee /etc/hostname
	I0127 20:19:41.126448   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0127 20:19:41.134059   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0127 20:19:41.134845   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0127 20:19:41.134858   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0127 20:19:41.134870   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0127 20:19:41.134883   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0127 20:19:41.494475   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0127 20:19:41.494491   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0127 20:19:41.598480   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0127 20:19:41.598498   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0127 20:19:41.598512   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0127 20:19:41.598577   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0127 20:19:41.599411   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0127 20:19:41.599424   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:41 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0127 20:19:46.060078   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:46 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0127 20:19:46.060116   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:46 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0127 20:19:46.060132   13574 main.go:141] libmachine: (no-preload-272000) DBG | 2023/01/27 20:19:46 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0127 20:19:54.314652   13574 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-272000
	
	I0127 20:19:54.314673   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:54.314808   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:54.314907   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.315004   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.315093   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:54.315264   13574 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:54.315399   13574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.41 22 <nil> <nil>}
	I0127 20:19:54.315411   13574 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-272000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-272000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-272000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:19:54.384673   13574 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:19:54.384689   13574 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3235/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3235/.minikube}
	I0127 20:19:54.384704   13574 buildroot.go:174] setting up certificates
	I0127 20:19:54.384714   13574 provision.go:83] configureAuth start
	I0127 20:19:54.384721   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetMachineName
	I0127 20:19:54.384855   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetIP
	I0127 20:19:54.384953   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:54.385040   13574 provision.go:138] copyHostCerts
	I0127 20:19:54.385125   13574 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem, removing ...
	I0127 20:19:54.385134   13574 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem
	I0127 20:19:54.385256   13574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem (1675 bytes)
	I0127 20:19:54.385488   13574 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem, removing ...
	I0127 20:19:54.385494   13574 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem
	I0127 20:19:54.385557   13574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem (1082 bytes)
	I0127 20:19:54.385712   13574 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem, removing ...
	I0127 20:19:54.385718   13574 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem
	I0127 20:19:54.385777   13574 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem (1123 bytes)
	I0127 20:19:54.385903   13574 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem org=jenkins.no-preload-272000 san=[192.168.64.41 192.168.64.41 localhost 127.0.0.1 minikube no-preload-272000]
	I0127 20:19:54.448389   13574 provision.go:172] copyRemoteCerts
	I0127 20:19:54.448440   13574 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:19:54.448453   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:54.448570   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:54.448650   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.448751   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:54.448857   13574 sshutil.go:53] new ssh client: &{IP:192.168.64.41 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/id_rsa Username:docker}
	I0127 20:19:54.487040   13574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 20:19:54.502118   13574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 20:19:54.517240   13574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 20:19:54.532219   13574 provision.go:86] duration metric: configureAuth took 147.490013ms
	I0127 20:19:54.532229   13574 buildroot.go:189] setting minikube options for container-runtime
	I0127 20:19:54.532382   13574 config.go:180] Loaded profile config "no-preload-272000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:19:54.532394   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:54.532517   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:54.532601   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:54.532688   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.532775   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.532847   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:54.532960   13574 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:54.533065   13574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.41 22 <nil> <nil>}
	I0127 20:19:54.533073   13574 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:19:54.598370   13574 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 20:19:54.598382   13574 buildroot.go:70] root file system type: tmpfs
	I0127 20:19:54.598513   13574 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:19:54.598529   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:54.598658   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:54.598761   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.598853   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.598931   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:54.599059   13574 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:54.599171   13574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.41 22 <nil> <nil>}
	I0127 20:19:54.599216   13574 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:19:54.675973   13574 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:19:54.675994   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:54.676143   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:54.676245   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.676358   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:54.676449   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:54.676578   13574 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:54.676684   13574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.41 22 <nil> <nil>}
	I0127 20:19:54.676698   13574 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:19:55.214708   13574 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 20:19:55.214721   13574 machine.go:91] provisioned docker machine in 14.09082043s
	I0127 20:19:55.214732   13574 start.go:300] post-start starting for "no-preload-272000" (driver="hyperkit")
	I0127 20:19:55.214738   13574 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:19:55.214748   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:55.214937   13574 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:19:55.214952   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:55.215043   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:55.215131   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:55.215218   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:55.215303   13574 sshutil.go:53] new ssh client: &{IP:192.168.64.41 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/id_rsa Username:docker}
	I0127 20:19:55.256698   13574 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:19:55.259266   13574 info.go:137] Remote host: Buildroot 2021.02.12
	I0127 20:19:55.259278   13574 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/addons for local assets ...
	I0127 20:19:55.259368   13574 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/files for local assets ...
	I0127 20:19:55.259537   13574 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem -> 44422.pem in /etc/ssl/certs
	I0127 20:19:55.259722   13574 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:19:55.265443   13574 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /etc/ssl/certs/44422.pem (1708 bytes)
	I0127 20:19:55.281252   13574 start.go:303] post-start completed in 66.510907ms
	I0127 20:19:55.281267   13574 fix.go:57] fixHost completed within 14.244824641s
	I0127 20:19:55.281280   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:55.281416   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:55.281505   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:55.281595   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:55.281690   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:55.281825   13574 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:55.281939   13574 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.41 22 <nil> <nil>}
	I0127 20:19:55.281947   13574 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 20:19:55.346523   13574 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674879595.534611557
	
	I0127 20:19:55.346533   13574 fix.go:207] guest clock: 1674879595.534611557
	I0127 20:19:55.346538   13574 fix.go:220] Guest: 2023-01-27 20:19:55.534611557 -0800 PST Remote: 2023-01-27 20:19:55.28127 -0800 PST m=+14.677460358 (delta=253.341557ms)
	I0127 20:19:55.346559   13574 fix.go:191] guest clock delta is within tolerance: 253.341557ms
	I0127 20:19:55.346564   13574 start.go:83] releasing machines lock for "no-preload-272000", held for 14.310148981s
	I0127 20:19:55.346582   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:55.346743   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetIP
	I0127 20:19:55.346851   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:55.347166   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:55.347298   13574 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:19:55.347374   13574 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0127 20:19:55.347400   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:55.347451   13574 ssh_runner.go:195] Run: cat /version.json
	I0127 20:19:55.347470   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:19:55.347520   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:55.347588   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:19:55.347619   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:55.347724   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:55.347744   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:19:55.347807   13574 sshutil.go:53] new ssh client: &{IP:192.168.64.41 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/id_rsa Username:docker}
	I0127 20:19:55.347827   13574 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:19:55.347924   13574 sshutil.go:53] new ssh client: &{IP:192.168.64.41 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/id_rsa Username:docker}
	W0127 20:19:55.383093   13574 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0127 20:19:55.383162   13574 ssh_runner.go:195] Run: systemctl --version
	I0127 20:19:55.428724   13574 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 20:19:55.433582   13574 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 20:19:55.433671   13574 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 20:19:55.440969   13574 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0127 20:19:55.455508   13574 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 20:19:55.469410   13574 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 20:19:55.469425   13574 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 20:19:55.469435   13574 start.go:472] detecting cgroup driver to use...
	I0127 20:19:55.469544   13574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:19:55.486921   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0127 20:19:55.494163   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:19:55.501386   13574 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:19:55.501449   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:19:55.508830   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:19:55.515676   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:19:55.522654   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:19:55.529686   13574 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:19:55.536910   13574 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:19:55.543922   13574 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:19:55.550299   13574 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:19:55.556707   13574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:19:55.641310   13574 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:19:55.654354   13574 start.go:472] detecting cgroup driver to use...
	I0127 20:19:55.674567   13574 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:19:55.684418   13574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:19:55.693494   13574 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 20:19:55.708316   13574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:19:55.717889   13574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:19:55.726248   13574 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 20:19:55.750568   13574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:19:55.763478   13574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:19:55.778370   13574 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:19:55.870107   13574 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:19:55.949187   13574 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:19:55.949206   13574 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:19:55.961448   13574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:19:56.040241   13574 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:20:57.072491   13574 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.032911028s)
	I0127 20:20:57.096981   13574 out.go:177] 
	W0127 20:20:57.117073   13574 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0127 20:20:57.117102   13574 out.go:239] * 
	W0127 20:20:57.118381   13574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:20:57.203826   13574 out.go:177] 

** /stderr **
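Note: the containerd.go:145 step in the log above ("configuring containerd to use \"cgroupfs\" as cgroup driver") is a plain sed rewrite of /etc/containerd/config.toml inside the VM. A minimal local sketch of that same rewrite, run against a throwaway copy (the TOML section name here is a typical containerd layout and is assumed, not taken from the log; sudo is dropped; GNU sed is assumed):

```shell
# Reproduce, against a temp file, the SystemdCgroup rewrite the log applies
# to /etc/containerd/config.toml.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same sed expression as in the log: force cgroupfs (SystemdCgroup = false).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp"
result=$(grep 'SystemdCgroup' "$tmp")
echo "$result"
rm -f "$tmp"
```

The surrounding sed invocations in the log (runtime v1 -> runc.v2, conf_dir, sandbox_image) follow the same pattern; cgroupfs is chosen here because MINIKUBE_FORCE_SYSTEMD is unset in this run.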
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.1": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 6 (146.307256ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0127 20:20:57.372342   13672 status.go:415] kubeconfig endpoint: extract IP: "no-preload-272000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (76.79s)
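Note: the docker.go:529 line in the log above reports minikube copying a 144-byte /etc/docker/daemon.json into the VM to switch docker to the "cgroupfs" cgroup driver; the `systemctl restart docker` that then exits 1 is the daemon reloading that file. The log does not print the file itself, so the fragment below is only a sketch of the typical shape such a daemon.json takes, not the verbatim contents:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
```

A malformed or conflicting daemon.json is one common cause of the RUNTIME_ENABLE failure seen here; `systemctl status docker.service` and `journalctl -xe` inside the VM, as the error text suggests, would show the daemon's actual complaint.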

TestStartStop/group/old-k8s-version/serial/SecondStart (86.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0127 20:19:48.679931    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:19:58.159351    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:19:59.829519    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:19:59.835236    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:19:59.846408    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:19:59.868269    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:19:59.909494    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:19:59.989943    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:00.150318    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:00.472125    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:01.112789    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:02.393532    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:04.953987    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:10.075066    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:11.409835    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:20:11.643921    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 20:20:20.316318    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:24.017277    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:20:35.607544    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:35.613929    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:35.626181    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:35.647295    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:35.688083    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:35.768342    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:35.929345    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:36.251769    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:36.892330    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:38.174529    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:40.736722    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:40.798207    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:20:45.857689    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:20:56.098069    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: exit status 90 (1m25.96182586s)

-- stdout --
	* [old-k8s-version-159000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the hyperkit driver based on existing profile
	* Starting control plane node old-k8s-version-159000 in cluster old-k8s-version-159000
	* Restarting existing hyperkit VM for "old-k8s-version-159000" ...
	
	

-- /stdout --
** stderr ** 
	I0127 20:19:45.871025   13598 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:19:45.871240   13598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:19:45.871245   13598 out.go:309] Setting ErrFile to fd 2...
	I0127 20:19:45.871249   13598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:19:45.871376   13598 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 20:19:45.871855   13598 out.go:303] Setting JSON to false
	I0127 20:19:45.890570   13598 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4760,"bootTime":1674874825,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 20:19:45.890668   13598 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 20:19:45.912341   13598 out.go:177] * [old-k8s-version-159000] minikube v1.28.0 on Darwin 13.2
	I0127 20:19:45.955274   13598 notify.go:220] Checking for updates...
	I0127 20:19:45.976336   13598 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 20:19:46.023753   13598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 20:19:46.065949   13598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 20:19:46.086821   13598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 20:19:46.108242   13598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 20:19:46.130241   13598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 20:19:46.152665   13598 config.go:180] Loaded profile config "old-k8s-version-159000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:19:46.153357   13598 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:46.153433   13598 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:19:46.161164   13598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56229
	I0127 20:19:46.161588   13598 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:19:46.162009   13598 main.go:141] libmachine: Using API Version  1
	I0127 20:19:46.162020   13598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:19:46.162251   13598 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:19:46.162366   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:19:46.183958   13598 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0127 20:19:46.204955   13598 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 20:19:46.205532   13598 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:46.205578   13598 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:19:46.213672   13598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56231
	I0127 20:19:46.214054   13598 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:19:46.214401   13598 main.go:141] libmachine: Using API Version  1
	I0127 20:19:46.214411   13598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:19:46.214640   13598 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:19:46.214765   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:19:46.241957   13598 out.go:177] * Using the hyperkit driver based on existing profile
	I0127 20:19:46.284004   13598 start.go:296] selected driver: hyperkit
	I0127 20:19:46.284039   13598 start.go:840] validating driver "hyperkit" against &{Name:old-k8s-version-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-159000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.40 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:
[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:19:46.284236   13598 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 20:19:46.287903   13598 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:46.288003   13598 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0127 20:19:46.294656   13598 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0127 20:19:46.298015   13598 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:46.298032   13598 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0127 20:19:46.298122   13598 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 20:19:46.298143   13598 cni.go:84] Creating CNI manager for ""
	I0127 20:19:46.298157   13598 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 20:19:46.298168   13598 start_flags.go:319] config:
	{Name:old-k8s-version-159000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-159000 Namespa
ce:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.40 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpir
ation:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 20:19:46.298274   13598 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 20:19:46.319813   13598 out.go:177] * Starting control plane node old-k8s-version-159000 in cluster old-k8s-version-159000
	I0127 20:19:46.341065   13598 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:19:46.341151   13598 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 20:19:46.341189   13598 cache.go:57] Caching tarball of preloaded images
	I0127 20:19:46.341368   13598 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 20:19:46.341386   13598 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0127 20:19:46.341553   13598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/config.json ...
	I0127 20:19:46.342318   13598 cache.go:193] Successfully downloaded all kic artifacts
	I0127 20:19:46.342389   13598 start.go:364] acquiring machines lock for old-k8s-version-159000: {Name:mk69c04a34b14d26e3f74e414bcb566a33d5b215 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 20:19:55.346620   13598 start.go:368] acquired machines lock for "old-k8s-version-159000" in 9.003837446s
	I0127 20:19:55.346675   13598 start.go:96] Skipping create...Using existing machine configuration
	I0127 20:19:55.346688   13598 fix.go:55] fixHost starting: 
	I0127 20:19:55.347021   13598 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:19:55.347045   13598 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:19:55.354375   13598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56249
	I0127 20:19:55.354785   13598 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:19:55.355155   13598 main.go:141] libmachine: Using API Version  1
	I0127 20:19:55.355168   13598 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:19:55.355390   13598 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:19:55.355497   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:19:55.355748   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetState
	I0127 20:19:55.355844   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:19:55.355923   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | hyperkit pid from json: 12977
	I0127 20:19:55.356787   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | hyperkit pid 12977 missing from process table
	I0127 20:19:55.356856   13598 fix.go:103] recreateIfNeeded on old-k8s-version-159000: state=Stopped err=<nil>
	I0127 20:19:55.356883   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	W0127 20:19:55.357001   13598 fix.go:129] unexpected machine state, will restart: <nil>
	I0127 20:19:55.380427   13598 out.go:177] * Restarting existing hyperkit VM for "old-k8s-version-159000" ...
	I0127 20:19:55.401575   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .Start
	I0127 20:19:55.401906   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:19:55.402002   13598 main.go:141] libmachine: (old-k8s-version-159000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/hyperkit.pid
	I0127 20:19:55.403646   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | hyperkit pid 12977 missing from process table
	I0127 20:19:55.403666   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | pid 12977 is in state "Stopped"
	I0127 20:19:55.403721   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/hyperkit.pid...
	I0127 20:19:55.422920   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Using UUID a160b142-9ec2-11ed-84b5-149d997fca88
	I0127 20:19:55.450859   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Generated MAC 2e:af:cf:a9:c:ca
	I0127 20:19:55.450877   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=old-k8s-version-159000
	I0127 20:19:55.451073   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a160b142-9ec2-11ed-84b5-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cade0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0127 20:19:55.451118   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a160b142-9ec2-11ed-84b5-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003cade0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0127 20:19:55.451187   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a160b142-9ec2-11ed-84b5-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/old-k8s-version-159000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/tty,log=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/bzimage,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=old-k8s-version-159000"}
	I0127 20:19:55.451233   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a160b142-9ec2-11ed-84b5-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/old-k8s-version-159000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/tty,log=/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/bzimage,/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=old-k8s-version-159000"
	I0127 20:19:55.451257   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0127 20:19:55.452552   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 DEBUG: hyperkit: Pid is 13617
	I0127 20:19:55.452916   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Attempt 0
	I0127 20:19:55.452952   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:19:55.453030   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | hyperkit pid from json: 13617
	I0127 20:19:55.454828   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Searching for 2e:af:cf:a9:c:ca in /var/db/dhcpd_leases ...
	I0127 20:19:55.454936   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Found 40 entries in /var/db/dhcpd_leases!
	I0127 20:19:55.454959   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.41 HWAddress:96:1e:f3:16:73:71 ID:1,96:1e:f3:16:73:71 Lease:0x63d5f3e4}
	I0127 20:19:55.455002   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.40 HWAddress:2e:af:cf:a9:c:ca ID:1,2e:af:cf:a9:c:ca Lease:0x63d5f34c}
	I0127 20:19:55.455020   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | Found match: 2e:af:cf:a9:c:ca
	I0127 20:19:55.455043   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetConfigRaw
	I0127 20:19:55.455043   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | IP: 192.168.64.40
	I0127 20:19:55.455657   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetIP
	I0127 20:19:55.455834   13598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/config.json ...
	I0127 20:19:55.456274   13598 machine.go:88] provisioning docker machine ...
	I0127 20:19:55.456286   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:19:55.456426   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetMachineName
	I0127 20:19:55.456539   13598 buildroot.go:166] provisioning hostname "old-k8s-version-159000"
	I0127 20:19:55.456549   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetMachineName
	I0127 20:19:55.456639   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:19:55.456741   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:19:55.456838   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:19:55.456945   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:19:55.457038   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:19:55.457148   13598 main.go:141] libmachine: Using SSH client type: native
	I0127 20:19:55.457315   13598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.40 22 <nil> <nil>}
	I0127 20:19:55.457326   13598 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-159000 && echo "old-k8s-version-159000" | sudo tee /etc/hostname
	I0127 20:19:55.459650   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0127 20:19:55.467309   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0127 20:19:55.468360   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0127 20:19:55.468377   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0127 20:19:55.468404   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0127 20:19:55.468418   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0127 20:19:55.831278   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0127 20:19:55.831298   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0127 20:19:55.935469   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0127 20:19:55.935499   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0127 20:19:55.935511   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0127 20:19:55.935526   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0127 20:19:55.936345   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0127 20:19:55.936358   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:19:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0127 20:20:00.387048   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:20:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0127 20:20:00.387082   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:20:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0127 20:20:00.387100   13598 main.go:141] libmachine: (old-k8s-version-159000) DBG | 2023/01/27 20:20:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0127 20:20:08.651356   13598 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-159000
	
	I0127 20:20:08.651373   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:08.651509   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:08.651613   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.651707   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.651835   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:08.651979   13598 main.go:141] libmachine: Using SSH client type: native
	I0127 20:20:08.652143   13598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.40 22 <nil> <nil>}
	I0127 20:20:08.652155   13598 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-159000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-159000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-159000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 20:20:08.732720   13598 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 20:20:08.732739   13598 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3235/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3235/.minikube}
	I0127 20:20:08.732754   13598 buildroot.go:174] setting up certificates
	I0127 20:20:08.732764   13598 provision.go:83] configureAuth start
	I0127 20:20:08.732772   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetMachineName
	I0127 20:20:08.732900   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetIP
	I0127 20:20:08.733013   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:08.733122   13598 provision.go:138] copyHostCerts
	I0127 20:20:08.733217   13598 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem, removing ...
	I0127 20:20:08.733226   13598 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem
	I0127 20:20:08.733347   13598 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/cert.pem (1123 bytes)
	I0127 20:20:08.733543   13598 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem, removing ...
	I0127 20:20:08.733549   13598 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem
	I0127 20:20:08.733607   13598 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/key.pem (1675 bytes)
	I0127 20:20:08.733741   13598 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem, removing ...
	I0127 20:20:08.733746   13598 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem
	I0127 20:20:08.733800   13598 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3235/.minikube/ca.pem (1082 bytes)
	I0127 20:20:08.733908   13598 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-159000 san=[192.168.64.40 192.168.64.40 localhost 127.0.0.1 minikube old-k8s-version-159000]
	I0127 20:20:08.822428   13598 provision.go:172] copyRemoteCerts
	I0127 20:20:08.822482   13598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 20:20:08.822495   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:08.822619   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:08.822711   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.822805   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:08.822895   13598 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/id_rsa Username:docker}
	I0127 20:20:08.867068   13598 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0127 20:20:08.882243   13598 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0127 20:20:08.897241   13598 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 20:20:08.912261   13598 provision.go:86] duration metric: configureAuth took 179.485478ms
	I0127 20:20:08.912273   13598 buildroot.go:189] setting minikube options for container-runtime
	I0127 20:20:08.912417   13598 config.go:180] Loaded profile config "old-k8s-version-159000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:20:08.912432   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:20:08.912566   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:08.912660   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:08.912750   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.912842   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.912913   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:08.913011   13598 main.go:141] libmachine: Using SSH client type: native
	I0127 20:20:08.913123   13598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.40 22 <nil> <nil>}
	I0127 20:20:08.913132   13598 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 20:20:08.992124   13598 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 20:20:08.992136   13598 buildroot.go:70] root file system type: tmpfs
	I0127 20:20:08.992253   13598 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 20:20:08.992276   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:08.992406   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:08.992502   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.992615   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:08.992717   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:08.992847   13598 main.go:141] libmachine: Using SSH client type: native
	I0127 20:20:08.992956   13598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.40 22 <nil> <nil>}
	I0127 20:20:08.993006   13598 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 20:20:09.077920   13598 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 20:20:09.077950   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:09.078083   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:09.078172   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.078266   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.078365   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:09.078501   13598 main.go:141] libmachine: Using SSH client type: native
	I0127 20:20:09.078621   13598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.40 22 <nil> <nil>}
	I0127 20:20:09.078637   13598 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 20:20:09.616021   13598 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 20:20:09.616044   13598 machine.go:91] provisioned docker machine in 14.159653527s
	I0127 20:20:09.616059   13598 start.go:300] post-start starting for "old-k8s-version-159000" (driver="hyperkit")
	I0127 20:20:09.616066   13598 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 20:20:09.616078   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:20:09.616260   13598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 20:20:09.616272   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:09.616368   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:09.616464   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.616561   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:09.616661   13598 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/id_rsa Username:docker}
	I0127 20:20:09.661547   13598 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 20:20:09.664186   13598 info.go:137] Remote host: Buildroot 2021.02.12
	I0127 20:20:09.664200   13598 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/addons for local assets ...
	I0127 20:20:09.664283   13598 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3235/.minikube/files for local assets ...
	I0127 20:20:09.664458   13598 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem -> 44422.pem in /etc/ssl/certs
	I0127 20:20:09.664610   13598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 20:20:09.670061   13598 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/ssl/certs/44422.pem --> /etc/ssl/certs/44422.pem (1708 bytes)
	I0127 20:20:09.685746   13598 start.go:303] post-start completed in 69.672621ms
	I0127 20:20:09.685761   13598 fix.go:57] fixHost completed within 14.338970101s
	I0127 20:20:09.685789   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:09.685920   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:09.686009   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.686101   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.686191   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:09.686309   13598 main.go:141] libmachine: Using SSH client type: native
	I0127 20:20:09.686417   13598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 192.168.64.40 22 <nil> <nil>}
	I0127 20:20:09.686428   13598 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 20:20:09.763160   13598 main.go:141] libmachine: SSH cmd err, output: <nil>: 1674879609.952460228
	
	I0127 20:20:09.763177   13598 fix.go:207] guest clock: 1674879609.952460228
	I0127 20:20:09.763182   13598 fix.go:220] Guest: 2023-01-27 20:20:09.952460228 -0800 PST Remote: 2023-01-27 20:20:09.685764 -0800 PST m=+23.863158941 (delta=266.696228ms)
	I0127 20:20:09.763202   13598 fix.go:191] guest clock delta is within tolerance: 266.696228ms
	I0127 20:20:09.763207   13598 start.go:83] releasing machines lock for "old-k8s-version-159000", held for 14.416468451s
	I0127 20:20:09.763229   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:20:09.763355   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetIP
	I0127 20:20:09.763452   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:20:09.763791   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:20:09.763907   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:20:09.763983   13598 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0127 20:20:09.764013   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:09.764053   13598 ssh_runner.go:195] Run: cat /version.json
	I0127 20:20:09.764066   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:20:09.764113   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:09.764198   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.764215   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:20:09.764293   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:20:09.764315   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:09.764380   13598 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/id_rsa Username:docker}
	I0127 20:20:09.764397   13598 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:20:09.764477   13598 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/id_rsa Username:docker}
	W0127 20:20:09.805972   13598 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0-1674856271-15565 -> Actual minikube version: v1.28.0
	I0127 20:20:09.806032   13598 ssh_runner.go:195] Run: systemctl --version
	I0127 20:20:09.984615   13598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 20:20:09.988879   13598 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 20:20:09.988926   13598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0127 20:20:09.996281   13598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0127 20:20:10.008553   13598 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 20:20:10.008566   13598 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 20:20:10.008650   13598 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 20:20:10.027747   13598 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0127 20:20:10.027761   13598 docker.go:560] Images already preloaded, skipping extraction
	I0127 20:20:10.027767   13598 start.go:472] detecting cgroup driver to use...
	I0127 20:20:10.027859   13598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:20:10.040340   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0127 20:20:10.047522   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 20:20:10.054341   13598 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 20:20:10.054387   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 20:20:10.061445   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:20:10.068297   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 20:20:10.075173   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 20:20:10.082194   13598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 20:20:10.089239   13598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 20:20:10.096160   13598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 20:20:10.102366   13598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 20:20:10.108501   13598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:20:10.191554   13598 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 20:20:10.204686   13598 start.go:472] detecting cgroup driver to use...
	I0127 20:20:10.204759   13598 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 20:20:10.229127   13598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:20:10.238554   13598 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 20:20:10.261715   13598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 20:20:10.275845   13598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:20:10.286954   13598 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 20:20:10.310450   13598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 20:20:10.320780   13598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 20:20:10.337126   13598 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 20:20:10.420169   13598 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 20:20:10.510093   13598 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 20:20:10.510114   13598 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0127 20:20:10.521457   13598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 20:20:10.614064   13598 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 20:21:11.645895   13598 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.032889695s)
	I0127 20:21:11.667720   13598 out.go:177] 
	W0127 20:21:11.689502   13598 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0127 20:21:11.689527   13598 out.go:239] * 
	W0127 20:21:11.690788   13598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:21:11.753335   13598 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 6 (158.322077ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:11.933398   13705 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-159000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (86.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-272000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 6 (138.382164ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:20:57.511760   13677 status.go:415] kubeconfig endpoint: extract IP: "no-preload-272000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-272000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-272000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-272000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (34.855451ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-272000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-272000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 6 (146.397532ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:20:57.693603   13683 status.go:415] kubeconfig endpoint: extract IP: "no-preload-272000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (59.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-272000 "sudo crictl images -o json"
E0127 20:21:10.599451    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-272000 "sudo crictl images -o json": exit status 1 (59.627525323s)

                                                
                                                
-- stdout --
	FATA[0059] listing images: rpc error: code = Unknown desc = error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.41/images/json": read unix @->/var/run/docker.sock: read: connection reset by peer 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-272000 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json: invalid character '\x1b' looking for beginning of value. output:
FATA[0059] listing images: rpc error: code = Unknown desc = error during connect: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.41/images/json": read unix @->/var/run/docker.sock: read: connection reset by peer 
start_stop_delete_test.go:304: v1.26.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.6-0",
- 	"registry.k8s.io/kube-apiserver:v1.26.1",
- 	"registry.k8s.io/kube-controller-manager:v1.26.1",
- 	"registry.k8s.io/kube-proxy:v1.26.1",
- 	"registry.k8s.io/kube-scheduler:v1.26.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 6 (155.564259ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:57.470251   13815 status.go:415] kubeconfig endpoint: extract IP: "no-preload-272000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (59.78s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-159000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 6 (151.978327ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:12.086898   13710 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-159000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-159000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (35.041574ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-159000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-159000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 6 (148.456691ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:12.270967   13716 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-159000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-159000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-159000 "sudo crictl images -o json": exit status 1 (2.14451223s)

                                                
                                                
-- stdout --
	FATA[0002] connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-159000 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json: invalid character '\x1b' looking for beginning of value. output:
FATA[0002] connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 6 (149.516543ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:14.565600   13733 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-159000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-159000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-159000 --alsologtostderr -v=1: exit status 80 (1.883183573s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-159000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 20:21:14.629770   13738 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:21:14.630656   13738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:21:14.630663   13738 out.go:309] Setting ErrFile to fd 2...
	I0127 20:21:14.630667   13738 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:21:14.630769   13738 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 20:21:14.631083   13738 out.go:303] Setting JSON to false
	I0127 20:21:14.631100   13738 mustload.go:65] Loading cluster: old-k8s-version-159000
	I0127 20:21:14.631359   13738 config.go:180] Loaded profile config "old-k8s-version-159000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0127 20:21:14.631737   13738 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:21:14.631785   13738 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:21:14.638785   13738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56352
	I0127 20:21:14.639771   13738 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:21:14.640210   13738 main.go:141] libmachine: Using API Version  1
	I0127 20:21:14.640228   13738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:21:14.640440   13738 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:21:14.640555   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetState
	I0127 20:21:14.640647   13738 main.go:141] libmachine: (old-k8s-version-159000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:21:14.640718   13738 main.go:141] libmachine: (old-k8s-version-159000) DBG | hyperkit pid from json: 13617
	I0127 20:21:14.641631   13738 host.go:66] Checking if "old-k8s-version-159000" exists ...
	I0127 20:21:14.641874   13738 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:21:14.641910   13738 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:21:14.648615   13738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56354
	I0127 20:21:14.648971   13738 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:21:14.649295   13738 main.go:141] libmachine: Using API Version  1
	I0127 20:21:14.649311   13738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:21:14.649510   13738 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:21:14.649603   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:21:14.650206   13738 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.29.0-1674856271-15565/minikube-v1.29.0-1674856271-15565-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.29.0-1674856271-15565-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/Users:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-159000 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 socket-vmnet-client-path:/opt/socket_vmnet/bin/socket_vmnet_client socket-vmnet-path:/var/run/socket_vmnet ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0127 20:21:14.694255   13738 out.go:177] * Pausing node old-k8s-version-159000 ... 
	I0127 20:21:14.715322   13738 host.go:66] Checking if "old-k8s-version-159000" exists ...
	I0127 20:21:14.715922   13738 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:21:14.715968   13738 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:21:14.724409   13738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56356
	I0127 20:21:14.724787   13738 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:21:14.725150   13738 main.go:141] libmachine: Using API Version  1
	I0127 20:21:14.725162   13738 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:21:14.725384   13738 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:21:14.725503   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .DriverName
	I0127 20:21:14.725652   13738 ssh_runner.go:195] Run: systemctl --version
	I0127 20:21:14.725669   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHHostname
	I0127 20:21:14.725739   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHPort
	I0127 20:21:14.725816   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHKeyPath
	I0127 20:21:14.725911   13738 main.go:141] libmachine: (old-k8s-version-159000) Calling .GetSSHUsername
	I0127 20:21:14.725998   13738 sshutil.go:53] new ssh client: &{IP:192.168.64.40 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/old-k8s-version-159000/id_rsa Username:docker}
	I0127 20:21:14.766325   13738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:14.775076   13738 pause.go:51] kubelet running: false
	I0127 20:21:14.775126   13738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:14.784633   13738 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0127 20:21:15.061205   13738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:15.073060   13738 pause.go:51] kubelet running: false
	I0127 20:21:15.073111   13738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:15.083250   13738 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0127 20:21:15.625330   13738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:15.637123   13738 pause.go:51] kubelet running: false
	I0127 20:21:15.637188   13738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:15.647782   13738 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0127 20:21:16.303548   13738 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:16.313686   13738 pause.go:51] kubelet running: false
	I0127 20:21:16.313764   13738 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:16.347009   13738 out.go:177] 
	W0127 20:21:16.368870   13738 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0127 20:21:16.368896   13738 out.go:239] * 
	* 
	W0127 20:21:16.373568   13738 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:21:16.433869   13738 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p old-k8s-version-159000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
E0127 20:21:16.579001    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 6 (149.118398ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:16.598708   13743 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-159000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 6 (149.862821ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:16.749583   13748 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-159000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-159000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (2.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-272000 --alsologtostderr -v=1
E0127 20:21:57.538313    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-272000 --alsologtostderr -v=1: exit status 80 (1.914312725s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-272000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 20:21:57.542012   13820 out.go:296] Setting OutFile to fd 1 ...
	I0127 20:21:57.542263   13820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:21:57.542268   13820 out.go:309] Setting ErrFile to fd 2...
	I0127 20:21:57.542272   13820 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 20:21:57.542410   13820 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 20:21:57.542757   13820 out.go:303] Setting JSON to false
	I0127 20:21:57.542774   13820 mustload.go:65] Loading cluster: no-preload-272000
	I0127 20:21:57.543058   13820 config.go:180] Loaded profile config "no-preload-272000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 20:21:57.543422   13820 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:21:57.543480   13820 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:21:57.550443   13820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56419
	I0127 20:21:57.550828   13820 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:21:57.551290   13820 main.go:141] libmachine: Using API Version  1
	I0127 20:21:57.551305   13820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:21:57.551532   13820 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:21:57.551637   13820 main.go:141] libmachine: (no-preload-272000) Calling .GetState
	I0127 20:21:57.551725   13820 main.go:141] libmachine: (no-preload-272000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 20:21:57.551810   13820 main.go:141] libmachine: (no-preload-272000) DBG | hyperkit pid from json: 13586
	I0127 20:21:57.552711   13820 host.go:66] Checking if "no-preload-272000" exists ...
	I0127 20:21:57.552990   13820 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:21:57.553013   13820 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:21:57.560080   13820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56421
	I0127 20:21:57.560445   13820 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:21:57.560778   13820 main.go:141] libmachine: Using API Version  1
	I0127 20:21:57.560797   13820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:21:57.561011   13820 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:21:57.561126   13820 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:21:57.561761   13820 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.29.0-1674856271-15565/minikube-v1.29.0-1674856271-15565-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.29.0-1674856271-15565-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/Users:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-272000 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 socket-vmnet-client-path:/opt/socket_vmnet/bin/socket_vmnet_client socket-vmnet-path:/var/run/socket_vmnet ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0127 20:21:57.583149   13820 out.go:177] * Pausing node no-preload-272000 ... 
	I0127 20:21:57.624944   13820 host.go:66] Checking if "no-preload-272000" exists ...
	I0127 20:21:57.625303   13820 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 20:21:57.625331   13820 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 20:21:57.632371   13820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:56423
	I0127 20:21:57.632747   13820 main.go:141] libmachine: () Calling .GetVersion
	I0127 20:21:57.633100   13820 main.go:141] libmachine: Using API Version  1
	I0127 20:21:57.633114   13820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 20:21:57.633365   13820 main.go:141] libmachine: () Calling .GetMachineName
	I0127 20:21:57.633489   13820 main.go:141] libmachine: (no-preload-272000) Calling .DriverName
	I0127 20:21:57.633653   13820 ssh_runner.go:195] Run: systemctl --version
	I0127 20:21:57.633672   13820 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHHostname
	I0127 20:21:57.633751   13820 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHPort
	I0127 20:21:57.633833   13820 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHKeyPath
	I0127 20:21:57.633909   13820 main.go:141] libmachine: (no-preload-272000) Calling .GetSSHUsername
	I0127 20:21:57.633989   13820 sshutil.go:53] new ssh client: &{IP:192.168.64.41 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/no-preload-272000/id_rsa Username:docker}
	I0127 20:21:57.669284   13820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:57.677795   13820 pause.go:51] kubelet running: false
	I0127 20:21:57.677859   13820 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:57.687048   13820 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0127 20:21:57.964188   13820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:57.973491   13820 pause.go:51] kubelet running: false
	I0127 20:21:57.973563   13820 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:57.983250   13820 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0127 20:21:58.523581   13820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:58.534070   13820 pause.go:51] kubelet running: false
	I0127 20:21:58.534133   13820 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:58.543663   13820 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I0127 20:21:59.200944   13820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 20:21:59.211588   13820 pause.go:51] kubelet running: false
	I0127 20:21:59.211646   13820 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0127 20:21:59.259945   13820 out.go:177] 
	W0127 20:21:59.280988   13820 out.go:239] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W0127 20:21:59.281004   13820 out.go:239] * 
	* 
	W0127 20:21:59.284231   13820 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0127 20:21:59.359995   13820 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p no-preload-272000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 6 (140.910137ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:59.531763   13829 status.go:415] kubeconfig endpoint: extract IP: "no-preload-272000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 6 (146.574425ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 20:21:59.678541   13834 status.go:415] kubeconfig endpoint: extract IP: "no-preload-272000" does not appear in /Users/jenkins/minikube-integration/15565-3235/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-272000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (2.20s)

                                                
                                    

Test pass (266/298)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.19
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.33
10 TestDownloadOnly/v1.26.1/json-events 6.61
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.42
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
19 TestBinaryMirror 1.01
20 TestOffline 59.98
22 TestAddons/Setup 135.83
24 TestAddons/parallel/Registry 15.48
25 TestAddons/parallel/Ingress 20.47
26 TestAddons/parallel/MetricsServer 5.54
27 TestAddons/parallel/HelmTiller 10.53
29 TestAddons/parallel/CSI 40.76
30 TestAddons/parallel/Headlamp 11.46
31 TestAddons/parallel/CloudSpanner 5.32
34 TestAddons/serial/GCPAuth/Namespaces 0.09
35 TestAddons/StoppedEnableDisable 8.59
36 TestCertOptions 41.45
38 TestDockerFlags 50.37
39 TestForceSystemdFlag 44.36
40 TestForceSystemdEnv 43.51
42 TestHyperKitDriverInstallOrUpdate 10.98
46 TestErrorSpam/start 1.56
47 TestErrorSpam/status 0.48
48 TestErrorSpam/pause 1.19
49 TestErrorSpam/unpause 1.32
50 TestErrorSpam/stop 3.63
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 58.29
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 70.63
57 TestFunctional/serial/KubeContext 0.03
58 TestFunctional/serial/KubectlGetPods 0.05
61 TestFunctional/serial/CacheCmd/cache/add_remote 6.66
62 TestFunctional/serial/CacheCmd/cache/add_local 1.5
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.16
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
67 TestFunctional/serial/CacheCmd/cache/delete 0.16
68 TestFunctional/serial/MinikubeKubectlCmd 0.52
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.68
70 TestFunctional/serial/ExtraConfig 45.99
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 2.65
73 TestFunctional/serial/LogsFileCmd 2.82
75 TestFunctional/parallel/ConfigCmd 0.5
76 TestFunctional/parallel/DashboardCmd 8.43
77 TestFunctional/parallel/DryRun 0.96
78 TestFunctional/parallel/InternationalLanguage 0.53
79 TestFunctional/parallel/StatusCmd 0.48
82 TestFunctional/parallel/ServiceCmd 11.22
83 TestFunctional/parallel/ServiceCmdConnect 7.56
84 TestFunctional/parallel/AddonsCmd 0.3
85 TestFunctional/parallel/PersistentVolumeClaim 24.62
87 TestFunctional/parallel/SSHCmd 0.28
88 TestFunctional/parallel/CpCmd 0.61
89 TestFunctional/parallel/MySQL 21.8
90 TestFunctional/parallel/FileSync 0.17
91 TestFunctional/parallel/CertSync 1.08
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
99 TestFunctional/parallel/License 0.53
100 TestFunctional/parallel/Version/short 0.1
101 TestFunctional/parallel/Version/components 0.62
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.17
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
106 TestFunctional/parallel/ImageCommands/ImageBuild 3.32
107 TestFunctional/parallel/ImageCommands/Setup 2.63
108 TestFunctional/parallel/DockerEnv/bash 0.74
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.99
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.08
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.31
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.23
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.55
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.05
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.14
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
130 TestFunctional/parallel/ProfileCmd/profile_list 0.29
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
132 TestFunctional/parallel/MountCmd/any-port 6.87
133 TestFunctional/parallel/MountCmd/specific-port 1.32
134 TestFunctional/delete_addon-resizer_images 0.15
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestIngressAddonLegacy/StartLegacyK8sCluster 72.35
142 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.84
143 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.46
144 TestIngressAddonLegacy/serial/ValidateIngressAddons 34.74
147 TestJSONOutput/start/Command 54.07
148 TestJSONOutput/start/Audit 0
150 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/pause/Command 0.48
154 TestJSONOutput/pause/Audit 0
156 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/unpause/Command 0.44
160 TestJSONOutput/unpause/Audit 0
162 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/stop/Command 8.16
166 TestJSONOutput/stop/Audit 0
168 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
170 TestErrorJSONOutput 0.75
175 TestMainNoArgs 0.08
176 TestMinikubeProfile 92.43
179 TestMountStart/serial/StartWithMountFirst 15.04
180 TestMountStart/serial/VerifyMountFirst 0.28
181 TestMountStart/serial/StartWithMountSecond 14.95
182 TestMountStart/serial/VerifyMountSecond 0.3
183 TestMountStart/serial/DeleteFirst 2.38
184 TestMountStart/serial/VerifyMountPostDelete 0.29
185 TestMountStart/serial/Stop 2.22
186 TestMountStart/serial/RestartStopped 16.59
187 TestMountStart/serial/VerifyMountPostStop 0.31
190 TestMultiNode/serial/FreshStart2Nodes 100.05
191 TestMultiNode/serial/DeployApp2Nodes 4.89
192 TestMultiNode/serial/PingHostFrom2Pods 0.87
193 TestMultiNode/serial/AddNode 36.74
194 TestMultiNode/serial/ProfileList 0.22
195 TestMultiNode/serial/CopyFile 5.21
196 TestMultiNode/serial/StopNode 2.69
197 TestMultiNode/serial/StartAfterStop 29.75
198 TestMultiNode/serial/RestartKeepsNodes 127.86
199 TestMultiNode/serial/DeleteNode 3.01
200 TestMultiNode/serial/StopMultiNode 16.49
201 TestMultiNode/serial/RestartMultiNode 79.07
202 TestMultiNode/serial/ValidateNameConflict 45.33
206 TestPreload 194.44
208 TestScheduledStopUnix 112.03
209 TestSkaffold 77.25
212 TestRunningBinaryUpgrade 172.7
214 TestKubernetesUpgrade 160.32
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.91
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 5.97
229 TestStoppedBinaryUpgrade/Setup 0.72
230 TestStoppedBinaryUpgrade/Upgrade 156.35
231 TestStoppedBinaryUpgrade/MinikubeLogs 3.37
233 TestPause/serial/Start 57.11
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
243 TestNoKubernetes/serial/StartWithK8s 49.48
244 TestNoKubernetes/serial/StartWithStopK8s 16.94
245 TestPause/serial/SecondStartNoReconfiguration 40.93
246 TestNoKubernetes/serial/Start 15.52
247 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
248 TestNoKubernetes/serial/ProfileList 0.55
249 TestNoKubernetes/serial/Stop 2.24
250 TestNoKubernetes/serial/StartNoArgs 15.77
251 TestPause/serial/Pause 0.55
252 TestPause/serial/VerifyStatus 0.16
253 TestPause/serial/Unpause 0.51
254 TestPause/serial/PauseAgain 0.62
255 TestPause/serial/DeletePaused 5.27
256 TestPause/serial/VerifyDeletedResources 0.22
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.12
259 TestNetworkPlugins/group/kindnet/Start 72.22
260 TestNetworkPlugins/group/calico/Start 71.36
261 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
262 TestNetworkPlugins/group/kindnet/KubeletFlags 0.16
263 TestNetworkPlugins/group/kindnet/NetCatPod 14.2
264 TestNetworkPlugins/group/kindnet/DNS 0.12
265 TestNetworkPlugins/group/kindnet/Localhost 0.11
266 TestNetworkPlugins/group/kindnet/HairPin 0.11
267 TestNetworkPlugins/group/calico/ControllerPod 5.02
268 TestNetworkPlugins/group/custom-flannel/Start 67.63
269 TestNetworkPlugins/group/calico/KubeletFlags 0.19
270 TestNetworkPlugins/group/calico/NetCatPod 15.23
271 TestNetworkPlugins/group/calico/DNS 0.13
272 TestNetworkPlugins/group/calico/Localhost 0.11
273 TestNetworkPlugins/group/calico/HairPin 0.11
274 TestNetworkPlugins/group/false/Start 68.84
275 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
276 TestNetworkPlugins/group/custom-flannel/NetCatPod 19.17
277 TestNetworkPlugins/group/custom-flannel/DNS 0.12
278 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
279 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
280 TestNetworkPlugins/group/false/KubeletFlags 0.19
281 TestNetworkPlugins/group/false/NetCatPod 15.17
282 TestNetworkPlugins/group/enable-default-cni/Start 55.61
283 TestNetworkPlugins/group/false/DNS 0.12
284 TestNetworkPlugins/group/false/Localhost 0.13
285 TestNetworkPlugins/group/false/HairPin 0.11
286 TestNetworkPlugins/group/flannel/Start 60.63
287 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.15
288 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.24
289 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
290 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
291 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
292 TestNetworkPlugins/group/bridge/Start 62.59
293 TestNetworkPlugins/group/flannel/ControllerPod 5.01
294 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
295 TestNetworkPlugins/group/flannel/NetCatPod 16.16
296 TestNetworkPlugins/group/flannel/DNS 0.13
297 TestNetworkPlugins/group/flannel/Localhost 0.11
298 TestNetworkPlugins/group/flannel/HairPin 0.11
299 TestNetworkPlugins/group/kubenet/Start 54.9
300 TestNetworkPlugins/group/bridge/KubeletFlags 0.15
301 TestNetworkPlugins/group/bridge/NetCatPod 15.16
302 TestNetworkPlugins/group/bridge/DNS 0.12
303 TestNetworkPlugins/group/bridge/Localhost 0.1
304 TestNetworkPlugins/group/bridge/HairPin 0.1
306 TestStartStop/group/old-k8s-version/serial/FirstStart 140.23
307 TestNetworkPlugins/group/kubenet/KubeletFlags 0.16
308 TestNetworkPlugins/group/kubenet/NetCatPod 15.17
309 TestNetworkPlugins/group/kubenet/DNS 0.13
310 TestNetworkPlugins/group/kubenet/Localhost 0.11
311 TestNetworkPlugins/group/kubenet/HairPin 0.11
313 TestStartStop/group/no-preload/serial/FirstStart 99.76
314 TestStartStop/group/no-preload/serial/DeployApp 8.23
315 TestStartStop/group/old-k8s-version/serial/DeployApp 8.27
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.73
317 TestStartStop/group/no-preload/serial/Stop 8.29
318 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.63
319 TestStartStop/group/old-k8s-version/serial/Stop 8.25
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.02
335 TestStartStop/group/newest-cni/serial/FirstStart 50.28
336 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.6
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.27
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.78
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.66
343 TestStartStop/group/newest-cni/serial/Stop 8.3
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.31
345 TestStartStop/group/newest-cni/serial/SecondStart 38.34
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.18
349 TestStartStop/group/newest-cni/serial/Pause 1.84
351 TestStartStop/group/embed-certs/serial/FirstStart 54.39
352 TestStartStop/group/embed-certs/serial/DeployApp 10.29
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.67
354 TestStartStop/group/embed-certs/serial/Stop 8.3
355 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
356 TestStartStop/group/embed-certs/serial/SecondStart 298.14
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.01
358 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
359 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
360 TestStartStop/group/default-k8s-diff-port/serial/Pause 1.9
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.01
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.18
364 TestStartStop/group/embed-certs/serial/Pause 1.83
TestDownloadOnly/v1.16.0/json-events (10.19s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperkit : (10.190413123s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.19s)
TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)
TestDownloadOnly/v1.16.0/LogsDuration (0.33s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-953000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-953000: exit status 85 (333.452069ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-953000 | jenkins | v1.28.0 | 27 Jan 23 19:30 PST |          |
	|         | -p download-only-953000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 19:30:11
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 19:30:11.651394    4444 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:30:11.651633    4444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:11.651638    4444 out.go:309] Setting ErrFile to fd 2...
	I0127 19:30:11.651642    4444 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:11.651747    4444 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	W0127 19:30:11.651845    4444 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3235/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3235/.minikube/config/config.json: no such file or directory
	I0127 19:30:11.652521    4444 out.go:303] Setting JSON to true
	I0127 19:30:11.670936    4444 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1786,"bootTime":1674874825,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 19:30:11.671022    4444 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:30:11.693311    4444 out.go:97] [download-only-953000] minikube v1.28.0 on Darwin 13.2
	I0127 19:30:11.693539    4444 notify.go:220] Checking for updates...
	W0127 19:30:11.693597    4444 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball: no such file or directory
	I0127 19:30:11.714826    4444 out.go:169] MINIKUBE_LOCATION=15565
	I0127 19:30:11.737290    4444 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 19:30:11.759183    4444 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:30:11.780779    4444 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:30:11.802072    4444 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	W0127 19:30:11.844681    4444 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 19:30:11.845131    4444 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:30:11.933867    4444 out.go:97] Using the hyperkit driver based on user configuration
	I0127 19:30:11.933915    4444 start.go:296] selected driver: hyperkit
	I0127 19:30:11.933929    4444 start.go:840] validating driver "hyperkit" against <nil>
	I0127 19:30:11.934042    4444 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 19:30:11.934324    4444 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0127 19:30:12.071466    4444 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0127 19:30:12.075300    4444 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:30:12.075316    4444 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0127 19:30:12.075347    4444 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0127 19:30:12.079850    4444 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0127 19:30:12.079959    4444 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 19:30:12.079988    4444 cni.go:84] Creating CNI manager for ""
	I0127 19:30:12.080004    4444 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0127 19:30:12.080013    4444 start_flags.go:319] config:
	{Name:download-only-953000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:30:12.080232    4444 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 19:30:12.101926    4444 out.go:97] Downloading VM boot image ...
	I0127 19:30:12.102104    4444 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/iso/amd64/minikube-v1.29.0-1674856271-15565-amd64.iso
	I0127 19:30:16.289106    4444 out.go:97] Starting control plane node download-only-953000 in cluster download-only-953000
	I0127 19:30:16.289190    4444 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 19:30:16.343374    4444 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 19:30:16.343406    4444 cache.go:57] Caching tarball of preloaded images
	I0127 19:30:16.343740    4444 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 19:30:16.365182    4444 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0127 19:30:16.365234    4444 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:16.498838    4444 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0127 19:30:20.390568    4444 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:20.390710    4444 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:20.938678    4444 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0127 19:30:20.938904    4444 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/download-only-953000/config.json ...
	I0127 19:30:20.938931    4444 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/download-only-953000/config.json: {Name:mk6b80d5d1d2b4fbce7edac200358fc204378423 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 19:30:20.939200    4444 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0127 19:30:20.939487    4444 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-953000"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.33s)
TestDownloadOnly/v1.26.1/json-events (6.61s)
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-953000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=hyperkit : (6.613862903s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (6.61s)
TestDownloadOnly/v1.26.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)
TestDownloadOnly/v1.26.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)
TestDownloadOnly/v1.26.1/LogsDuration (0.29s)
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-953000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-953000: exit status 85 (294.264253ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-953000 | jenkins | v1.28.0 | 27 Jan 23 19:30 PST |          |
	|         | -p download-only-953000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-953000 | jenkins | v1.28.0 | 27 Jan 23 19:30 PST |          |
	|         | -p download-only-953000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/27 19:30:22
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 19:30:22.174912    4468 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:30:22.175073    4468 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:22.175079    4468 out.go:309] Setting ErrFile to fd 2...
	I0127 19:30:22.175083    4468 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:30:22.175197    4468 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	W0127 19:30:22.175291    4468 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-3235/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-3235/.minikube/config/config.json: no such file or directory
	I0127 19:30:22.175626    4468 out.go:303] Setting JSON to true
	I0127 19:30:22.194982    4468 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1797,"bootTime":1674874825,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 19:30:22.195061    4468 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:30:22.217024    4468 out.go:97] [download-only-953000] minikube v1.28.0 on Darwin 13.2
	I0127 19:30:22.217116    4468 notify.go:220] Checking for updates...
	I0127 19:30:22.237982    4468 out.go:169] MINIKUBE_LOCATION=15565
	I0127 19:30:22.258914    4468 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 19:30:22.279722    4468 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:30:22.303265    4468 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:30:22.325267    4468 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	W0127 19:30:22.367965    4468 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 19:30:22.368662    4468 config.go:180] Loaded profile config "download-only-953000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0127 19:30:22.368758    4468 start.go:748] api.Load failed for download-only-953000: filestore "download-only-953000": Docker machine "download-only-953000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0127 19:30:22.368840    4468 driver.go:365] Setting default libvirt URI to qemu:///system
	W0127 19:30:22.368880    4468 start.go:748] api.Load failed for download-only-953000: filestore "download-only-953000": Docker machine "download-only-953000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0127 19:30:22.396856    4468 out.go:97] Using the hyperkit driver based on existing profile
	I0127 19:30:22.396905    4468 start.go:296] selected driver: hyperkit
	I0127 19:30:22.396916    4468 start.go:840] validating driver "hyperkit" against &{Name:download-only-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-953000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:30:22.397185    4468 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 19:30:22.397371    4468 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3235/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0127 19:30:22.404886    4468 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
	I0127 19:30:22.408184    4468 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:30:22.408221    4468 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0127 19:30:22.410433    4468 cni.go:84] Creating CNI manager for ""
	I0127 19:30:22.410455    4468 cni.go:157] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0127 19:30:22.410470    4468 start_flags.go:319] config:
	{Name:download-only-953000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-953000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketV
MnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:30:22.410609    4468 iso.go:125] acquiring lock: {Name:mkeeb6f52f7fa0577f04180383dbb7ed67f33d88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 19:30:22.431746    4468 out.go:97] Starting control plane node download-only-953000 in cluster download-only-953000
	I0127 19:30:22.431772    4468 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 19:30:22.492044    4468 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 19:30:22.492062    4468 cache.go:57] Caching tarball of preloaded images
	I0127 19:30:22.492242    4468 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0127 19:30:22.514060    4468 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0127 19:30:22.514086    4468 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:22.590677    4468 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0127 19:30:27.017792    4468 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0127 19:30:27.017976    4468 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-3235/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-953000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.29s)

TestDownloadOnly/DeleteAll (0.42s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.42s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-953000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestBinaryMirror (1.01s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-033000 --alsologtostderr --binary-mirror http://127.0.0.1:49399 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-033000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-033000
--- PASS: TestBinaryMirror (1.01s)

TestOffline (59.98s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-310000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-310000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (54.710958896s)
helpers_test.go:175: Cleaning up "offline-docker-310000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-310000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-310000: (5.269434913s)
--- PASS: TestOffline (59.98s)

TestAddons/Setup (135.83s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-113000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-113000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m15.829705019s)
--- PASS: TestAddons/Setup (135.83s)

TestAddons/parallel/Registry (15.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 10.940882ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-77s6g" [5a2cc6b5-5b8c-40f8-baef-fba00baf14d3] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006555178s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k9j4z" [fac77c28-318f-40f3-92a5-30ab7c21a0b4] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015864978s
addons_test.go:305: (dbg) Run:  kubectl --context addons-113000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-113000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-113000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.866706925s)
addons_test.go:324: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 ip
2023/01/27 19:33:02 [DEBUG] GET http://192.168.64.2:5000
addons_test.go:353: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.48s)

TestAddons/parallel/Ingress (20.47s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-113000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-113000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-113000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4237c40c-ca0d-422d-9697-0104b3b48657] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [4237c40c-ca0d-422d-9697-0104b3b48657] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010575953s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-113000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.64.2
addons_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-darwin-amd64 -p addons-113000 addons disable ingress-dns --alsologtostderr -v=1: (1.055331982s)
addons_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:276: (dbg) Done: out/minikube-darwin-amd64 -p addons-113000 addons disable ingress --alsologtostderr -v=1: (7.44683411s)
--- PASS: TestAddons/parallel/Ingress (20.47s)

TestAddons/parallel/MetricsServer (5.54s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 1.838819ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-4fcwv" [48a67406-b978-4975-afad-d19c8d6fe0c3] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007130692s
addons_test.go:380: (dbg) Run:  kubectl --context addons-113000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.54s)

TestAddons/parallel/HelmTiller (10.53s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 1.496353ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-s9mh7" [6954db3d-f227-43d8-9ffb-cfc5f0723550] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007081541s
addons_test.go:438: (dbg) Run:  kubectl --context addons-113000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:438: (dbg) Done: kubectl --context addons-113000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.136653221s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.53s)

TestAddons/parallel/CSI (40.76s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.729321ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-113000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-113000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-113000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-113000 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [54bc582f-b937-4caf-8ae2-e0e379bde4b5] Pending
helpers_test.go:344: "task-pv-pod" [54bc582f-b937-4caf-8ae2-e0e379bde4b5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod" [54bc582f-b937-4caf-8ae2-e0e379bde4b5] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.012724815s
addons_test.go:549: (dbg) Run:  kubectl --context addons-113000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-113000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:419: (dbg) Run:  kubectl --context addons-113000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-113000 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-113000 delete pod task-pv-pod: (1.041135505s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-113000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-113000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-113000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-113000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [55c1b655-000c-42fd-8087-f0c24071b33d] Pending
helpers_test.go:344: "task-pv-pod-restore" [55c1b655-000c-42fd-8087-f0c24071b33d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:344: "task-pv-pod-restore" [55c1b655-000c-42fd-8087-f0c24071b33d] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.017264156s
addons_test.go:591: (dbg) Run:  kubectl --context addons-113000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-113000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-113000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable csi-hostpath-driver --alsologtostderr -v=1

=== CONT  TestAddons/parallel/CSI
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-113000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.600655758s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-113000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (40.76s)

TestAddons/parallel/Headlamp (11.46s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-113000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-113000 --alsologtostderr -v=1: (1.451039685s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-25kw9" [59091456-5f50-4387-9935-f7c14e3dde93] Pending
helpers_test.go:344: "headlamp-5759877c79-25kw9" [59091456-5f50-4387-9935-f7c14e3dde93] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:344: "headlamp-5759877c79-25kw9" [59091456-5f50-4387-9935-f7c14e3dde93] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.009350871s
--- PASS: TestAddons/parallel/Headlamp (11.46s)

TestAddons/parallel/CloudSpanner (5.32s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...

=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:344: "cloud-spanner-emulator-5dcf58dbbb-9qq9q" [28eeb53e-800f-474b-b3d5-c03402143405] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005386375s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-113000
--- PASS: TestAddons/parallel/CloudSpanner (5.32s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-113000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-113000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/StoppedEnableDisable (8.59s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-113000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-113000: (8.218739431s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-113000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-113000
--- PASS: TestAddons/StoppedEnableDisable (8.59s)

TestCertOptions (41.45s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-460000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-460000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (37.639369569s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-460000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-460000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-460000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-460000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-460000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-460000: (3.448098948s)
--- PASS: TestCertOptions (41.45s)

TestDockerFlags (50.37s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-643000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-643000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (44.758845193s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-643000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-643000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-643000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-643000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-643000: (5.29208172s)
--- PASS: TestDockerFlags (50.37s)

TestForceSystemdFlag (44.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-814000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-814000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (38.846155854s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-814000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-814000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-814000
E0127 20:02:08.596722    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-814000: (5.342945195s)
--- PASS: TestForceSystemdFlag (44.36s)

TestForceSystemdEnv (43.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-631000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-631000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (39.864255902s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-631000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-631000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-631000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-631000: (3.472070824s)
--- PASS: TestForceSystemdEnv (43.51s)

TestHyperKitDriverInstallOrUpdate (10.98s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.98s)

TestErrorSpam/start (1.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 start --dry-run
--- PASS: TestErrorSpam/start (1.56s)

TestErrorSpam/status (0.48s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 status
--- PASS: TestErrorSpam/status (0.48s)

TestErrorSpam/pause (1.19s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 pause
--- PASS: TestErrorSpam/pause (1.19s)

TestErrorSpam/unpause (1.32s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 unpause
--- PASS: TestErrorSpam/unpause (1.32s)

TestErrorSpam/stop (3.63s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 stop: (3.213890789s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-259000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-259000 stop
--- PASS: TestErrorSpam/stop (3.63s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-3235/.minikube/files/etc/test/nested/copy/4442/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.29s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-093000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-093000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (58.292445943s)
--- PASS: TestFunctional/serial/StartWithProxy (58.29s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (70.63s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-093000 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-093000 --alsologtostderr -v=8: (1m10.628782263s)
functional_test.go:656: soft start took 1m10.629337458s for "functional-093000" cluster.
--- PASS: TestFunctional/serial/SoftStart (70.63s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-093000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 cache add k8s.gcr.io/pause:3.1: (2.234092764s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 cache add k8s.gcr.io/pause:3.3: (2.289634384s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 cache add k8s.gcr.io/pause:latest: (2.140427535s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.66s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-093000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local2587717124/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cache add minikube-local-cache-test:functional-093000
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cache delete minikube-local-cache-test:functional-093000
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-093000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.16s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (133.191591ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 cache reload: (1.251082919s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 kubectl -- --context functional-093000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-093000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.68s)

TestFunctional/serial/ExtraConfig (45.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-093000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 19:37:47.057717    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.063498    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.073626    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.094167    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.135925    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.216731    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.378437    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:47.698839    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:48.339334    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:49.619521    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:52.180634    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:37:57.300723    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-093000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.990424477s)
functional_test.go:754: restart took 45.990599851s for "functional-093000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.99s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-093000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 logs: (2.645512606s)
--- PASS: TestFunctional/serial/LogsCmd (2.65s)

TestFunctional/serial/LogsFileCmd (2.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd4080915032/001/logs.txt
E0127 19:38:07.542210    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd4080915032/001/logs.txt: (2.816386546s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.82s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 config get cpus: exit status 14 (65.355073ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 config get cpus: exit status 14 (58.428102ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (8.43s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-093000 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-093000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 6495: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.43s)

TestFunctional/parallel/DryRun (0.96s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-093000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-093000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (551.076396ms)

-- stdout --
	* [functional-093000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0127 19:39:01.581038    6457 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:39:01.581282    6457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:01.581287    6457 out.go:309] Setting ErrFile to fd 2...
	I0127 19:39:01.581291    6457 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:01.581401    6457 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 19:39:01.581905    6457 out.go:303] Setting JSON to false
	I0127 19:39:01.600847    6457 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2316,"bootTime":1674874825,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 19:39:01.600950    6457 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:39:01.623456    6457 out.go:177] * [functional-093000] minikube v1.28.0 on Darwin 13.2
	I0127 19:39:01.666378    6457 notify.go:220] Checking for updates...
	I0127 19:39:01.688196    6457 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 19:39:01.710204    6457 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 19:39:01.731152    6457 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:39:01.754147    6457 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:39:01.774295    6457 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 19:39:01.817203    6457 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 19:39:01.859748    6457 config.go:180] Loaded profile config "functional-093000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:39:01.860421    6457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:39:01.860494    6457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:39:01.868209    6457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50453
	I0127 19:39:01.868582    6457 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:39:01.868989    6457 main.go:141] libmachine: Using API Version  1
	I0127 19:39:01.869002    6457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:39:01.869190    6457 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:39:01.869288    6457 main.go:141] libmachine: (functional-093000) Calling .DriverName
	I0127 19:39:01.869403    6457 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:39:01.869665    6457 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:39:01.869688    6457 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:39:01.876221    6457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50455
	I0127 19:39:01.876590    6457 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:39:01.876924    6457 main.go:141] libmachine: Using API Version  1
	I0127 19:39:01.876939    6457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:39:01.877141    6457 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:39:01.877236    6457 main.go:141] libmachine: (functional-093000) Calling .DriverName
	I0127 19:39:01.905220    6457 out.go:177] * Using the hyperkit driver based on existing profile
	I0127 19:39:01.963994    6457 start.go:296] selected driver: hyperkit
	I0127 19:39:01.964012    6457 start.go:840] validating driver "hyperkit" against &{Name:functional-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-093000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:39:01.964142    6457 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 19:39:01.988450    6457 out.go:177] 
	W0127 19:39:02.010549    6457 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0127 19:39:02.032081    6457 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-093000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.96s)

TestFunctional/parallel/InternationalLanguage (0.53s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-093000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-093000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (531.568324ms)

-- stdout --
	* [functional-093000] minikube v1.28.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0127 19:39:02.534606    6473 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:39:02.534767    6473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:02.534774    6473 out.go:309] Setting ErrFile to fd 2...
	I0127 19:39:02.534778    6473 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:39:02.534897    6473 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 19:39:02.535332    6473 out.go:303] Setting JSON to false
	I0127 19:39:02.554217    6473 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2317,"bootTime":1674874825,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0127 19:39:02.554313    6473 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0127 19:39:02.576126    6473 out.go:177] * [functional-093000] minikube v1.28.0 sur Darwin 13.2
	I0127 19:39:02.617930    6473 notify.go:220] Checking for updates...
	I0127 19:39:02.639802    6473 out.go:177]   - MINIKUBE_LOCATION=15565
	I0127 19:39:02.661015    6473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	I0127 19:39:02.682099    6473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0127 19:39:02.704306    6473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 19:39:02.726036    6473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	I0127 19:39:02.746989    6473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 19:39:02.768764    6473 config.go:180] Loaded profile config "functional-093000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:39:02.769391    6473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:39:02.769470    6473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:39:02.777306    6473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50467
	I0127 19:39:02.777675    6473 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:39:02.778078    6473 main.go:141] libmachine: Using API Version  1
	I0127 19:39:02.778089    6473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:39:02.778323    6473 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:39:02.778423    6473 main.go:141] libmachine: (functional-093000) Calling .DriverName
	I0127 19:39:02.778544    6473 driver.go:365] Setting default libvirt URI to qemu:///system
	I0127 19:39:02.778802    6473 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:39:02.778836    6473 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:39:02.785465    6473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50469
	I0127 19:39:02.785798    6473 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:39:02.786150    6473 main.go:141] libmachine: Using API Version  1
	I0127 19:39:02.786164    6473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:39:02.786385    6473 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:39:02.786485    6473 main.go:141] libmachine: (functional-093000) Calling .DriverName
	I0127 19:39:02.813920    6473 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0127 19:39:02.856124    6473 start.go:296] selected driver: hyperkit
	I0127 19:39:02.856150    6473 start.go:840] validating driver "hyperkit" against &{Name:functional-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.29.0-1674856271-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1674164627-15541@sha256:0a2280301e955e0d3910d6e639e0b7341db1f4a25558521ac97b38c782c6189a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-093000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.64.4 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0127 19:39:02.856311    6473 start.go:851] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 19:39:02.895829    6473 out.go:177] 
	W0127 19:39:02.917157    6473 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0127 19:39:02.939038    6473 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.53s)
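Both DryRun and InternationalLanguage pass precisely because `start` rejects the 250MB request: 250MiB is well under the usable minimum of 1800MB, even across the MiB/MB unit mismatch in the message. A minimal sketch of that comparison (`enoughMemory` is a hypothetical helper for illustration, not minikube's actual validation code):

```go
package main

import "fmt"

const (
	mib = 1024 * 1024 // mebibyte, the unit of the requested "250MiB"
	mb  = 1000 * 1000 // megabyte, the unit of the "1800MB" minimum
)

// enoughMemory reports whether a request in MiB meets a minimum given in MB
// (hypothetical helper, for illustration only).
func enoughMemory(requestedMiB, minimumMB int64) bool {
	return requestedMiB*mib >= minimumMB*mb
}

func main() {
	fmt.Println(enoughMemory(250, 1800))  // false: the failing request from the log
	fmt.Println(enoughMemory(2048, 1800)) // true: comfortably above the minimum
}
```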

TestFunctional/parallel/StatusCmd (0.48s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.48s)
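The `-f` flag above takes a Go text/template rendered against minikube's status value. A self-contained sketch of how that exact format string expands (the four-field `Status` struct and `Render` helper are illustrative; minikube's real status struct has more fields, and the "kublet" label is just a template literal, typo included):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status mirrors only the four fields the format string references;
// minikube's real status struct is larger.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// Format is the exact template passed via `status -f` in the log.
const Format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"

// Render executes the template against a Status value.
func Render(s Status) string {
	var b strings.Builder
	if err := template.Must(template.New("status").Parse(Format)).Execute(&b, s); err != nil {
		panic(err)
	}
	return b.String()
}

func main() {
	fmt.Println(Render(Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}))
}
```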

TestFunctional/parallel/ServiceCmd (11.22s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-093000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-093000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-wdl67" [ce7c018f-07ea-4ec6-912c-5e175635f6a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0127 19:38:28.022698    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:344: "hello-node-6fddd6858d-wdl67" [ce7c018f-07ea-4ec6-912c-5e175635f6a1] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.008091113s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.64.4:31863
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 service hello-node --url
functional_test.go:1511: found endpoint for hello-node: http://192.168.64.4:31863
--- PASS: TestFunctional/parallel/ServiceCmd (11.22s)
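The two endpoints found above are simply the node IP (192.168.64.4) joined with the NodePort Kubernetes allocated for the service (31863). A sketch of that URL construction (`serviceURL` is an illustrative helper, not minikube's actual code):

```go
package main

import (
	"fmt"
	"net/url"
)

// serviceURL rebuilds the endpoint printed for a NodePort service:
// scheme + node IP + allocated node port.
func serviceURL(https bool, nodeIP string, nodePort int) string {
	scheme := "http"
	if https {
		scheme = "https"
	}
	u := url.URL{Scheme: scheme, Host: fmt.Sprintf("%s:%d", nodeIP, nodePort)}
	return u.String()
}

func main() {
	// Values from the log: node 192.168.64.4, hello-node on NodePort 31863.
	fmt.Println(serviceURL(true, "192.168.64.4", 31863))  // https://192.168.64.4:31863
	fmt.Println(serviceURL(false, "192.168.64.4", 31863)) // http://192.168.64.4:31863
}
```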

TestFunctional/parallel/ServiceCmdConnect (7.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-093000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-093000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-v44qc" [3190f53f-e13a-4407-b701-5baaf5208f18] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:344: "hello-node-connect-5cf7cc858f-v44qc" [3190f53f-e13a-4407-b701-5baaf5208f18] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.007035883s
functional_test.go:1579: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.64.4:31232
functional_test.go:1605: http://192.168.64.4:31232: success! body:

Hostname: hello-node-connect-5cf7cc858f-v44qc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.64.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.64.4:31232
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.56s)

TestFunctional/parallel/AddonsCmd (0.3s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.30s)

TestFunctional/parallel/PersistentVolumeClaim (24.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a4e3600a-e371-4909-a354-a2b5853bf6de] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007015425s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-093000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-093000 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-093000 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-093000 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [39b278c4-8336-4746-8107-a8e8570382cd] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [39b278c4-8336-4746-8107-a8e8570382cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [39b278c4-8336-4746-8107-a8e8570382cd] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.008978238s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-093000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-093000 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-093000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1bfee538-8080-4b73-948e-d7b312c19af7] Pending
helpers_test.go:344: "sp-pod" [1bfee538-8080-4b73-948e-d7b312c19af7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [1bfee538-8080-4b73-948e-d7b312c19af7] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009116288s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-093000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.62s)

TestFunctional/parallel/SSHCmd (0.28s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

TestFunctional/parallel/CpCmd (0.61s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh -n functional-093000 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 cp functional-093000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2490808518/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh -n functional-093000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.61s)

TestFunctional/parallel/MySQL (21.8s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-093000 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-wksjl" [45faa998-2e16-4fbc-9b3a-08eb7ab90c79] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-wksjl" [45faa998-2e16-4fbc-9b3a-08eb7ab90c79] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.008883197s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-093000 exec mysql-888f84dd9-wksjl -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-093000 exec mysql-888f84dd9-wksjl -- mysql -ppassword -e "show databases;": exit status 1 (163.847385ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-093000 exec mysql-888f84dd9-wksjl -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-093000 exec mysql-888f84dd9-wksjl -- mysql -ppassword -e "show databases;": exit status 1 (124.84748ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-093000 exec mysql-888f84dd9-wksjl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.80s)
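The two ERROR 2002 failures above are the usual race between the pod reporting Running and mysqld creating its socket; the test simply re-runs the query until it succeeds. That poll-until-ready pattern sketched (`retry` is an illustrative helper, not the harness's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotReady stands in for mysqld's "can't connect through socket" error.
var errNotReady = errors.New("ERROR 2002 (HY000): can't connect through socket")

// retry runs fn up to attempts times, sleeping between failures, and
// returns the last error only if every attempt fails.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	// Simulate the socket appearing on the third probe, as in the log.
	err := retry(5, time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errNotReady
		}
		return nil
	})
	fmt.Println(err, calls) // succeeds on the third call
}
```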

TestFunctional/parallel/FileSync (0.17s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/4442/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /etc/test/nested/copy/4442/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

TestFunctional/parallel/CertSync (1.08s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/4442.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /etc/ssl/certs/4442.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/4442.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /usr/share/ca-certificates/4442.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/44422.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /etc/ssl/certs/44422.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/44422.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /usr/share/ca-certificates/44422.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.08s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-093000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
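The go-template in the command above prints the label keys of the first node. A minimal Python sketch of the same extraction, assuming a hypothetical two-label sample of `kubectl get nodes -o json` output (only the fields the template touches are included):

```python
import json

# Hypothetical sample of `kubectl get nodes -o json` output; the label set
# here is illustrative, not taken from the cluster under test.
nodes = json.loads("""
{"items": [{"metadata": {"labels": {
    "kubernetes.io/hostname": "functional-093000",
    "kubernetes.io/os": "linux"}}}]}
""")

# Equivalent of {{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}:
# iterate the first item's labels and emit each key.
label_keys = " ".join(sorted(nodes["items"][0]["metadata"]["labels"]))
print(label_keys)  # kubernetes.io/hostname kubernetes.io/os
```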

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 ssh "sudo systemctl is-active crio": exit status 1 (122.002672ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.62s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.62s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-093000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-093000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-093000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-093000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-093000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7               | 9ec14ca3fec4d | 455MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/localhost/my-image                | functional-093000 | cc59c610d69c5 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-093000 | b0ec81bac6897 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
E0127 19:39:08.982880    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
2023/01/27 19:39:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)
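The table rows above are straightforward to post-process. A small sketch that splits pipe-delimited rows like the ones in this output into fields (the two embedded rows are copied from the table above; the parser itself is illustrative, not part of minikube):

```python
# Two data rows copied from the `image ls --format table` output above.
table = """\
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
"""

def parse_rows(text: str):
    """Split each pipe-delimited row into (image, tag, image_id, size)."""
    rows = []
    for line in text.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip separator rows like |----|----| and anything malformed.
        if len(cells) == 4 and not cells[0].startswith("-"):
            rows.append(tuple(cells))
    return rows

for image, tag, image_id, size in parse_rows(table):
    print(image, tag, image_id, size)
```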

TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-093000 image ls --format json:
[{"id":"9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"cc59c610d69c55accdbfb377907a4651c0608e61898563ec2d3c05e72e18e466","rep
oDigests":[],"repoTags":["docker.io/localhost/my-image:functional-093000"],"size":"1240000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-093000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller
-manager:v1.26.1"],"size":"124000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"b0ec81bac6897d1169abec7e0b335fd253b68f1ce7fb7e925f25cb5b098611e6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-093000"],"size":"30"},{"id":"655493523f6076092624c06fd5facf9541a9b3d5
4e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.17s)
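The JSON variant is the most script-friendly of the list formats. A short sketch that sums the `size` fields, which this output encodes as strings of bytes, over an illustrative two-image subset of the listing above:

```python
import json

# Illustrative subset of the `image ls --format json` output above; the real
# listing has one object per image, with `size` encoded as a string of bytes.
listing = json.loads("""
[{"id": "e6f1816883972d", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.9"], "size": "744000"},
 {"id": "fce326961ae2d5", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.6-0"], "size": "299000000"}]
""")

def total_size_bytes(images) -> int:
    """Sum the string-encoded size field across all listed images."""
    return sum(int(img["size"]) for img in images)

print(total_size_bytes(listing))  # 299744000
```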

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-093000 image ls --format yaml:
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: b0ec81bac6897d1169abec7e0b335fd253b68f1ce7fb7e925f25cb5b098611e6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-093000
size: "30"
- id: 9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-093000
size: "32900000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 ssh pgrep buildkitd: exit status 1 (122.976349ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image build -t localhost/my-image:functional-093000 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image build -t localhost/my-image:functional-093000 testdata/build: (3.019793059s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-093000 image build -t localhost/my-image:functional-093000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 194401cc3846
Removing intermediate container 194401cc3846
---> a162f0a5bc12
Step 3/3 : ADD content.txt /
---> cc59c610d69c
Successfully built cc59c610d69c
Successfully tagged localhost/my-image:functional-093000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.32s)

TestFunctional/parallel/ImageCommands/Setup (2.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.498148574s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-093000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.63s)

TestFunctional/parallel/DockerEnv/bash (0.74s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-093000 docker-env) && out/minikube-darwin-amd64 status -p functional-093000"
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-093000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image load --daemon gcr.io/google-containers/addon-resizer:functional-093000

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image load --daemon gcr.io/google-containers/addon-resizer:functional-093000: (2.827038939s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.99s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image load --daemon gcr.io/google-containers/addon-resizer:functional-093000
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image load --daemon gcr.io/google-containers/addon-resizer:functional-093000: (1.913565184s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.08s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.995425591s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-093000
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image load --daemon gcr.io/google-containers/addon-resizer:functional-093000
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image load --daemon gcr.io/google-containers/addon-resizer:functional-093000: (3.087355843s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image save gcr.io/google-containers/addon-resizer:functional-093000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image save gcr.io/google-containers/addon-resizer:functional-093000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.232272436s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.23s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image rm gcr.io/google-containers/addon-resizer:functional-093000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image load /Users/jenkins/workspace/addon-resizer-save.tar

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.39719967s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-093000
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 image save --daemon gcr.io/google-containers/addon-resizer:functional-093000
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-093000 image save --daemon gcr.io/google-containers/addon-resizer:functional-093000: (1.931561166s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-093000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.05s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-093000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-093000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fa9fab66-f7c4-4d6b-aa5e-a6a75f177362] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [fa9fab66-f7c4-4d6b-aa5e-a6a75f177362] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006829628s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-093000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect

=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.110.102.89 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:254: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:262: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:286: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:294: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:359: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-093000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "203.464506ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "81.92429ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "199.721605ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "84.122896ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/MountCmd/any-port (6.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-093000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3034412017/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674877133336631000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3034412017/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674877133336631000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3034412017/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674877133336631000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3034412017/001/test-1674877133336631000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.651652ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 28 03:38 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 28 03:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 28 03:38 test-1674877133336631000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh cat /mount-9p/test-1674877133336631000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-093000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b600d648-8c83-4270-b38d-0ddcf1ccb366] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [b600d648-8c83-4270-b38d-0ddcf1ccb366] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [b600d648-8c83-4270-b38d-0ddcf1ccb366] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [b600d648-8c83-4270-b38d-0ddcf1ccb366] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006140965s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-093000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-093000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port3034412017/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.87s)

TestFunctional/parallel/MountCmd/specific-port (1.32s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-093000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1088084211/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.110312ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-093000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1088084211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-093000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-093000 ssh "sudo umount -f /mount-9p": exit status 1 (121.509389ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-093000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-093000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port1088084211/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.32s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-093000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-093000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-093000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestIngressAddonLegacy/StartLegacyK8sCluster (72.35s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-294000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-294000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperkit : (1m12.353017605s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (72.35s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.84s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons enable ingress --alsologtostderr -v=5: (11.84209852s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.84s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.46s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (34.74s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-294000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-294000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.052842558s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-294000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-294000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c172ec63-bf67-4701-92f0-515aefaddb1a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c172ec63-bf67-4701-92f0-515aefaddb1a] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.011798771s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-294000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.64.6
addons_test.go:271: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons disable ingress-dns --alsologtostderr -v=1: (8.54996007s)
addons_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-294000 addons disable ingress --alsologtostderr -v=1: (7.22706625s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (34.74s)

TestJSONOutput/start/Command (54.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-824000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0127 19:43:11.793399    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:11.798518    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:11.810484    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:11.831434    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:11.872507    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:11.991679    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:12.153720    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:12.475798    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:13.116254    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:14.396656    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:14.811975    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:43:16.956827    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:22.077828    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:43:32.318417    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-824000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (54.071341473s)
--- PASS: TestJSONOutput/start/Command (54.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-824000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-824000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.16s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-824000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-824000 --output=json --user=testUser: (8.15803015s)
--- PASS: TestJSONOutput/stop/Command (8.16s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.75s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-302000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-302000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (356.610024ms)
-- stdout --
	{"specversion":"1.0","id":"e982a95b-ecdd-4eac-adc2-9b16ff52a817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-302000] minikube v1.28.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5e39723b-ee60-4e84-a099-60d3b0e023d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"c563b1e3-91dc-4afd-831e-f7fd6e842f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig"}}
	{"specversion":"1.0","id":"cea125b3-cbc3-4422-b4eb-bd86c18ea102","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"e69958c5-a8e3-45c9-a531-1ee0d9d90002","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed6ca415-e721-44ed-949d-4b1cb5404681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube"}}
	{"specversion":"1.0","id":"7eaadb86-4c48-418d-a318-6d80db010510","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"be8f8805-f9b2-468e-b8f7-4f5673385635","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-302000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-302000
E0127 19:43:52.798372    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
--- PASS: TestErrorJSONOutput (0.75s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (92.43s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-230000 --driver=hyperkit 
E0127 19:44:33.758405    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-230000 --driver=hyperkit : (41.444402646s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-232000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-232000 --driver=hyperkit : (41.33046975s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-230000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-232000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-232000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-232000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-232000: (3.441599759s)
helpers_test.go:175: Cleaning up "first-230000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-230000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-230000: (5.271414376s)
--- PASS: TestMinikubeProfile (92.43s)

TestMountStart/serial/StartWithMountFirst (15.04s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-604000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-604000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (14.039912726s)
--- PASS: TestMountStart/serial/StartWithMountFirst (15.04s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-604000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-604000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (14.95s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-616000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-616000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (13.950312448s)
--- PASS: TestMountStart/serial/StartWithMountSecond (14.95s)

TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-616000 ssh -- ls /minikube-host
E0127 19:45:55.678790    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-616000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (2.38s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-604000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-604000 --alsologtostderr -v=5: (2.382438962s)
--- PASS: TestMountStart/serial/DeleteFirst (2.38s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-616000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-616000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (2.22s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-616000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-616000: (2.222986308s)
--- PASS: TestMountStart/serial/Stop (2.22s)

TestMountStart/serial/RestartStopped (16.59s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-616000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-616000: (15.585259893s)
--- PASS: TestMountStart/serial/RestartStopped (16.59s)

TestMountStart/serial/VerifyMountPostStop (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-616000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-616000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (100.05s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-556000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0127 19:47:08.624941    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:08.630618    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:08.640904    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:08.661836    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:08.702007    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:08.782170    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:08.944089    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:09.264487    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:09.904610    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:11.184751    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:13.746183    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:18.867178    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:29.108741    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:47:47.120319    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 19:47:49.590625    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-556000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m39.807052098s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.05s)

TestMultiNode/serial/DeployApp2Nodes (4.89s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-556000 -- rollout status deployment/busybox: (3.28497912s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-8mxr9 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-jv64l -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-8mxr9 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-jv64l -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-8mxr9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-jv64l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.89s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-8mxr9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-8mxr9 -- sh -c "ping -c 1 192.168.64.1"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-jv64l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-556000 -- exec busybox-6b86dd6d48-jv64l -- sh -c "ping -c 1 192.168.64.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/AddNode (36.74s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-556000 -v 3 --alsologtostderr
E0127 19:48:11.789969    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 19:48:30.550713    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:48:39.517128    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-556000 -v 3 --alsologtostderr: (36.432616372s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (36.74s)

TestMultiNode/serial/ProfileList (0.22s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (5.21s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp testdata/cp-test.txt multinode-556000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1014029289/001/cp-test_multinode-556000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000:/home/docker/cp-test.txt multinode-556000-m02:/home/docker/cp-test_multinode-556000_multinode-556000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m02 "sudo cat /home/docker/cp-test_multinode-556000_multinode-556000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000:/home/docker/cp-test.txt multinode-556000-m03:/home/docker/cp-test_multinode-556000_multinode-556000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m03 "sudo cat /home/docker/cp-test_multinode-556000_multinode-556000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp testdata/cp-test.txt multinode-556000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1014029289/001/cp-test_multinode-556000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000-m02:/home/docker/cp-test.txt multinode-556000:/home/docker/cp-test_multinode-556000-m02_multinode-556000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000 "sudo cat /home/docker/cp-test_multinode-556000-m02_multinode-556000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000-m02:/home/docker/cp-test.txt multinode-556000-m03:/home/docker/cp-test_multinode-556000-m02_multinode-556000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m03 "sudo cat /home/docker/cp-test_multinode-556000-m02_multinode-556000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp testdata/cp-test.txt multinode-556000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile1014029289/001/cp-test_multinode-556000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000-m03:/home/docker/cp-test.txt multinode-556000:/home/docker/cp-test_multinode-556000-m03_multinode-556000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000 "sudo cat /home/docker/cp-test_multinode-556000-m03_multinode-556000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 cp multinode-556000-m03:/home/docker/cp-test.txt multinode-556000-m02:/home/docker/cp-test_multinode-556000-m03_multinode-556000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 ssh -n multinode-556000-m02 "sudo cat /home/docker/cp-test_multinode-556000-m03_multinode-556000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.21s)

TestMultiNode/serial/StopNode (2.69s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-556000 node stop m03: (2.1912829s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-556000 status: exit status 7 (249.89507ms)
-- stdout --
	multinode-556000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-556000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-556000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr: exit status 7 (252.839803ms)
-- stdout --
	multinode-556000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-556000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-556000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0127 19:48:50.956745    7705 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:48:50.956905    7705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:48:50.956910    7705 out.go:309] Setting ErrFile to fd 2...
	I0127 19:48:50.956914    7705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:48:50.957026    7705 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 19:48:50.957218    7705 out.go:303] Setting JSON to false
	I0127 19:48:50.957241    7705 mustload.go:65] Loading cluster: multinode-556000
	I0127 19:48:50.957291    7705 notify.go:220] Checking for updates...
	I0127 19:48:50.957508    7705 config.go:180] Loaded profile config "multinode-556000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:48:50.957522    7705 status.go:255] checking status of multinode-556000 ...
	I0127 19:48:50.957855    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:50.957909    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:50.964488    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51479
	I0127 19:48:50.964815    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:50.965209    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:50.965224    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:50.965424    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:50.965527    7705 main.go:141] libmachine: (multinode-556000) Calling .GetState
	I0127 19:48:50.965602    7705 main.go:141] libmachine: (multinode-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 19:48:50.965668    7705 main.go:141] libmachine: (multinode-556000) DBG | hyperkit pid from json: 7302
	I0127 19:48:50.966752    7705 status.go:330] multinode-556000 host status = "Running" (err=<nil>)
	I0127 19:48:50.966771    7705 host.go:66] Checking if "multinode-556000" exists ...
	I0127 19:48:50.967004    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:50.967029    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:50.973860    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51481
	I0127 19:48:50.974235    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:50.974559    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:50.974569    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:50.974792    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:50.974895    7705 main.go:141] libmachine: (multinode-556000) Calling .GetIP
	I0127 19:48:50.974977    7705 host.go:66] Checking if "multinode-556000" exists ...
	I0127 19:48:50.975235    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:50.975256    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:50.986756    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51483
	I0127 19:48:50.987134    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:50.987505    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:50.987523    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:50.987741    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:50.987854    7705 main.go:141] libmachine: (multinode-556000) Calling .DriverName
	I0127 19:48:50.987976    7705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 19:48:50.987997    7705 main.go:141] libmachine: (multinode-556000) Calling .GetSSHHostname
	I0127 19:48:50.988074    7705 main.go:141] libmachine: (multinode-556000) Calling .GetSSHPort
	I0127 19:48:50.988165    7705 main.go:141] libmachine: (multinode-556000) Calling .GetSSHKeyPath
	I0127 19:48:50.988245    7705 main.go:141] libmachine: (multinode-556000) Calling .GetSSHUsername
	I0127 19:48:50.988323    7705 sshutil.go:53] new ssh client: &{IP:192.168.64.12 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/multinode-556000/id_rsa Username:docker}
	I0127 19:48:51.033378    7705 ssh_runner.go:195] Run: systemctl --version
	I0127 19:48:51.036765    7705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 19:48:51.045221    7705 kubeconfig.go:92] found "multinode-556000" server: "https://192.168.64.12:8443"
	I0127 19:48:51.045240    7705 api_server.go:165] Checking apiserver status ...
	I0127 19:48:51.045277    7705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 19:48:51.053153    7705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1847/cgroup
	I0127 19:48:51.058789    7705 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/pod0ae5d87f4180feec672b7827ac983f3c/01c31d59d9152dec970f1b450f55b6f1e6c7e19cfc921a5a4dbeeb7fe743f380"
	I0127 19:48:51.058828    7705 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod0ae5d87f4180feec672b7827ac983f3c/01c31d59d9152dec970f1b450f55b6f1e6c7e19cfc921a5a4dbeeb7fe743f380/freezer.state
	I0127 19:48:51.064568    7705 api_server.go:203] freezer state: "THAWED"
	I0127 19:48:51.064584    7705 api_server.go:252] Checking apiserver healthz at https://192.168.64.12:8443/healthz ...
	I0127 19:48:51.067900    7705 api_server.go:278] https://192.168.64.12:8443/healthz returned 200:
	ok
	I0127 19:48:51.067910    7705 status.go:421] multinode-556000 apiserver status = Running (err=<nil>)
	I0127 19:48:51.067918    7705 status.go:257] multinode-556000 status: &{Name:multinode-556000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 19:48:51.067929    7705 status.go:255] checking status of multinode-556000-m02 ...
	I0127 19:48:51.068186    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:51.068206    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:51.075169    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51487
	I0127 19:48:51.075529    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:51.075841    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:51.075850    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:51.076061    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:51.076164    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .GetState
	I0127 19:48:51.076240    7705 main.go:141] libmachine: (multinode-556000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 19:48:51.076315    7705 main.go:141] libmachine: (multinode-556000-m02) DBG | hyperkit pid from json: 7373
	I0127 19:48:51.077385    7705 status.go:330] multinode-556000-m02 host status = "Running" (err=<nil>)
	I0127 19:48:51.077394    7705 host.go:66] Checking if "multinode-556000-m02" exists ...
	I0127 19:48:51.077647    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:51.077671    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:51.084450    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51489
	I0127 19:48:51.084814    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:51.085122    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:51.085139    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:51.085360    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:51.085462    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .GetIP
	I0127 19:48:51.085536    7705 host.go:66] Checking if "multinode-556000-m02" exists ...
	I0127 19:48:51.085811    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:51.085834    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:51.092660    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51491
	I0127 19:48:51.093022    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:51.093352    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:51.093369    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:51.093558    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:51.093657    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .DriverName
	I0127 19:48:51.093774    7705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 19:48:51.093785    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .GetSSHHostname
	I0127 19:48:51.093855    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .GetSSHPort
	I0127 19:48:51.093927    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .GetSSHKeyPath
	I0127 19:48:51.093995    7705 main.go:141] libmachine: (multinode-556000-m02) Calling .GetSSHUsername
	I0127 19:48:51.094074    7705 sshutil.go:53] new ssh client: &{IP:192.168.64.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3235/.minikube/machines/multinode-556000-m02/id_rsa Username:docker}
	I0127 19:48:51.134140    7705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 19:48:51.143219    7705 status.go:257] multinode-556000-m02 status: &{Name:multinode-556000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 19:48:51.143235    7705 status.go:255] checking status of multinode-556000-m03 ...
	I0127 19:48:51.143538    7705 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:48:51.143561    7705 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:48:51.150477    7705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51494
	I0127 19:48:51.150867    7705 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:48:51.151227    7705 main.go:141] libmachine: Using API Version  1
	I0127 19:48:51.151243    7705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:48:51.151430    7705 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:48:51.151528    7705 main.go:141] libmachine: (multinode-556000-m03) Calling .GetState
	I0127 19:48:51.151608    7705 main.go:141] libmachine: (multinode-556000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 19:48:51.151690    7705 main.go:141] libmachine: (multinode-556000-m03) DBG | hyperkit pid from json: 7466
	I0127 19:48:51.152731    7705 main.go:141] libmachine: (multinode-556000-m03) DBG | hyperkit pid 7466 missing from process table
	I0127 19:48:51.152778    7705 status.go:330] multinode-556000-m03 host status = "Stopped" (err=<nil>)
	I0127 19:48:51.152787    7705 status.go:343] host is not running, skipping remaining checks
	I0127 19:48:51.152792    7705 status.go:257] multinode-556000-m03 status: &{Name:multinode-556000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.69s)

TestMultiNode/serial/StartAfterStop (29.75s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-556000 node start m03 --alsologtostderr: (29.379233351s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.75s)

TestMultiNode/serial/RestartKeepsNodes (127.86s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-556000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-556000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-556000: (18.390185382s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-556000 --wait=true -v=8 --alsologtostderr
E0127 19:49:52.470298    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-556000 --wait=true -v=8 --alsologtostderr: (1m49.348219623s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-556000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (127.86s)

TestMultiNode/serial/DeleteNode (3.01s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-556000 node delete m03: (2.66873003s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.01s)

TestMultiNode/serial/StopMultiNode (16.49s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-556000 stop: (16.342149148s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-556000 status: exit status 7 (74.859986ms)

-- stdout --
	multinode-556000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-556000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr: exit status 7 (74.965492ms)

-- stdout --
	multinode-556000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-556000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 19:51:48.233283    7985 out.go:296] Setting OutFile to fd 1 ...
	I0127 19:51:48.233498    7985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:51:48.233503    7985 out.go:309] Setting ErrFile to fd 2...
	I0127 19:51:48.233507    7985 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0127 19:51:48.233614    7985 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3235/.minikube/bin
	I0127 19:51:48.233794    7985 out.go:303] Setting JSON to false
	I0127 19:51:48.233818    7985 mustload.go:65] Loading cluster: multinode-556000
	I0127 19:51:48.233861    7985 notify.go:220] Checking for updates...
	I0127 19:51:48.234118    7985 config.go:180] Loaded profile config "multinode-556000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0127 19:51:48.234130    7985 status.go:255] checking status of multinode-556000 ...
	I0127 19:51:48.234521    7985 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:51:48.234571    7985 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:51:48.241277    7985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51710
	I0127 19:51:48.241639    7985 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:51:48.242021    7985 main.go:141] libmachine: Using API Version  1
	I0127 19:51:48.242036    7985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:51:48.242233    7985 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:51:48.242321    7985 main.go:141] libmachine: (multinode-556000) Calling .GetState
	I0127 19:51:48.242399    7985 main.go:141] libmachine: (multinode-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 19:51:48.242465    7985 main.go:141] libmachine: (multinode-556000) DBG | hyperkit pid from json: 7812
	I0127 19:51:48.243257    7985 main.go:141] libmachine: (multinode-556000) DBG | hyperkit pid 7812 missing from process table
	I0127 19:51:48.243278    7985 status.go:330] multinode-556000 host status = "Stopped" (err=<nil>)
	I0127 19:51:48.243285    7985 status.go:343] host is not running, skipping remaining checks
	I0127 19:51:48.243290    7985 status.go:257] multinode-556000 status: &{Name:multinode-556000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 19:51:48.243305    7985 status.go:255] checking status of multinode-556000-m02 ...
	I0127 19:51:48.243541    7985 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0127 19:51:48.243560    7985 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0127 19:51:48.250164    7985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51712
	I0127 19:51:48.250493    7985 main.go:141] libmachine: () Calling .GetVersion
	I0127 19:51:48.250845    7985 main.go:141] libmachine: Using API Version  1
	I0127 19:51:48.250876    7985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0127 19:51:48.251060    7985 main.go:141] libmachine: () Calling .GetMachineName
	I0127 19:51:48.251144    7985 main.go:141] libmachine: (multinode-556000-m02) Calling .GetState
	I0127 19:51:48.251233    7985 main.go:141] libmachine: (multinode-556000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0127 19:51:48.251301    7985 main.go:141] libmachine: (multinode-556000-m02) DBG | hyperkit pid from json: 7866
	I0127 19:51:48.252107    7985 main.go:141] libmachine: (multinode-556000-m02) DBG | hyperkit pid 7866 missing from process table
	I0127 19:51:48.252124    7985 status.go:330] multinode-556000-m02 host status = "Stopped" (err=<nil>)
	I0127 19:51:48.252129    7985 status.go:343] host is not running, skipping remaining checks
	I0127 19:51:48.252135    7985 status.go:257] multinode-556000-m02 status: &{Name:multinode-556000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.49s)

TestMultiNode/serial/RestartMultiNode (79.07s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-556000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0127 19:52:08.623080    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:52:36.310598    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 19:52:47.116486    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-556000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m18.744336131s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-556000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (79.07s)

TestMultiNode/serial/ValidateNameConflict (45.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-556000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-556000-m02 --driver=hyperkit 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-556000-m02 --driver=hyperkit : exit status 14 (356.890142ms)

-- stdout --
	* [multinode-556000-m02] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-556000-m02' is duplicated with machine name 'multinode-556000-m02' in profile 'multinode-556000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-556000-m03 --driver=hyperkit 
E0127 19:53:11.785518    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-556000-m03 --driver=hyperkit : (41.200815625s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-556000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-556000: exit status 80 (273.784409ms)

-- stdout --
	* Adding node m03 to cluster multinode-556000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-556000-m03 already exists in multinode-556000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-556000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-556000-m03: (3.440778447s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.33s)

TestPreload (194.44s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0127 19:54:10.165386    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-105000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m42.286498375s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-105000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-105000 -- docker pull gcr.io/k8s-minikube/busybox: (1.716017815s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-105000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-105000: (8.219932393s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-105000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0127 19:57:08.618153    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-105000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m16.7781165s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-105000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-105000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-105000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-105000: (5.270176976s)
--- PASS: TestPreload (194.44s)

TestScheduledStopUnix (112.03s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-264000 --memory=2048 --driver=hyperkit 
E0127 19:57:47.112574    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-264000 --memory=2048 --driver=hyperkit : (40.512172729s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-264000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-264000 -n scheduled-stop-264000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-264000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-264000 --cancel-scheduled
E0127 19:58:11.781245    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-264000 -n scheduled-stop-264000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-264000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-264000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-264000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-264000: exit status 7 (69.729339ms)

-- stdout --
	scheduled-stop-264000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-264000 -n scheduled-stop-264000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-264000 -n scheduled-stop-264000: exit status 7 (65.496642ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-264000
--- PASS: TestScheduledStopUnix (112.03s)

TestSkaffold (77.25s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2854757170 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-497000 --memory=2600 --driver=hyperkit 
E0127 19:59:34.869293    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-497000 --memory=2600 --driver=hyperkit : (41.712238409s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2854757170 run --minikube-profile skaffold-497000 --kube-context skaffold-497000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe2854757170 run --minikube-profile skaffold-497000 --kube-context skaffold-497000 --status-check=true --port-forward=false --interactive=false: (18.322112606s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-b8649cd7-n56z8" [f0e60d07-0d1c-44e6-9091-ac76f3c04463] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011234903s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-67dc767547-r2zhh" [c31f80e9-273d-41ad-8e44-227842bb7192] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006928648s
helpers_test.go:175: Cleaning up "skaffold-497000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-497000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-497000: (5.270359069s)
--- PASS: TestSkaffold (77.25s)

TestRunningBinaryUpgrade (172.7s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.4111984071.exe start -p running-upgrade-052000 --memory=2200 --vm-driver=hyperkit 
E0127 20:03:11.760071    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 20:03:31.645411    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.4111984071.exe start -p running-upgrade-052000 --memory=2200 --vm-driver=hyperkit : (1m33.530056707s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-052000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0127 20:05:11.409215    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:11.414712    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:11.425971    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:11.447651    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:11.487840    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:11.568416    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:11.729493    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:12.050605    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:12.691119    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:13.972230    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:16.532946    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:05:21.655287    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-052000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m12.984559448s)
helpers_test.go:175: Cleaning up "running-upgrade-052000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-052000

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-052000: (5.391186696s)
--- PASS: TestRunningBinaryUpgrade (172.70s)

TestKubernetesUpgrade (160.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperkit : (1m11.723222982s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-584000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-584000: (2.236651368s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-584000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-584000 status --format={{.Host}}: exit status 7 (66.454022ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperkit 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperkit : (37.901614038s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-584000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperkit : exit status 106 (553.413249ms)

-- stdout --
	* [kubernetes-upgrade-584000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-584000
	    minikube start -p kubernetes-upgrade-584000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5840002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-584000 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperkit 
E0127 20:07:55.255957    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:08:11.752974    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-584000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=hyperkit : (42.030531771s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-584000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-584000

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-584000: (5.765680166s)
--- PASS: TestKubernetesUpgrade (160.32s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.91s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2281024148/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2281024148/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2281024148/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2281024148/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.91s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (5.97s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2262583127/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2262583127/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2262583127/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2262583127/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (5.97s)

TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (156.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1235808472.exe start -p stopped-upgrade-266000 --memory=2200 --vm-driver=hyperkit 
E0127 20:06:33.337323    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:07:08.589108    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1235808472.exe start -p stopped-upgrade-266000 --memory=2200 --vm-driver=hyperkit : (1m29.883701678s)
version_upgrade_test.go:200: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1235808472.exe -p stopped-upgrade-266000 stop
version_upgrade_test.go:200: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1235808472.exe -p stopped-upgrade-266000 stop: (8.073599728s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-266000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0127 20:07:47.084382    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-266000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (58.388494195s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.35s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-266000

=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-266000: (3.374415072s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.37s)

TestPause/serial/Start (57.11s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-713000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-713000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (57.105292374s)
--- PASS: TestPause/serial/Start (57.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-182000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-182000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (392.980766ms)

-- stdout --
	* [NoKubernetes-182000] minikube v1.28.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3235/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3235/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

TestNoKubernetes/serial/StartWithK8s (49.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-182000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-182000 --driver=hyperkit : (49.292253925s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-182000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.48s)

TestNoKubernetes/serial/StartWithStopK8s (16.94s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-182000 --no-kubernetes --driver=hyperkit 

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-182000 --no-kubernetes --driver=hyperkit : (14.340307971s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-182000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-182000 status -o json: exit status 2 (155.021316ms)

-- stdout --
	{"Name":"NoKubernetes-182000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
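As an editorial aside (not part of the test run): the one-line JSON emitted by `status -o json` above can be checked programmatically. A minimal sketch, with the status line and field names copied verbatim from the log output:

```python
import json

# Status line captured from `minikube status -o json` in the log above.
status_line = ('{"Name":"NoKubernetes-182000","Host":"Running","Kubelet":"Stopped",'
               '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(status_line)

# The test expects the VM to be up but the Kubernetes components stopped,
# which is why `status` returns a non-zero code (exit status 2) here.
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
```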
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-182000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-182000: (2.444658545s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.94s)

TestPause/serial/SecondStartNoReconfiguration (40.93s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-713000 --alsologtostderr -v=1 --driver=hyperkit 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-713000 --alsologtostderr -v=1 --driver=hyperkit : (40.909109654s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.93s)

TestNoKubernetes/serial/Start (15.52s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-182000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-182000 --no-kubernetes --driver=hyperkit : (15.522075904s)
--- PASS: TestNoKubernetes/serial/Start (15.52s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (134.000216ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.55s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.55s)

TestNoKubernetes/serial/Stop (2.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-182000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-182000: (2.243624403s)
--- PASS: TestNoKubernetes/serial/Stop (2.24s)

TestNoKubernetes/serial/StartNoArgs (15.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-182000 --driver=hyperkit 
E0127 20:10:11.403144    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-182000 --driver=hyperkit : (15.772123399s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (15.77s)

TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-713000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

TestPause/serial/VerifyStatus (0.16s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-713000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-713000 --output=json --layout=cluster: exit status 2 (162.009584ms)

-- stdout --
	{"Name":"pause-713000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 16 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-713000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
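As an editorial aside (not part of the test run): the `--layout=cluster` payload above uses numeric status codes alongside names (418 for "Paused", 405 for "Stopped", 200 for "OK"). A minimal sketch of consuming it, using an abridged copy of the JSON from the log with field names kept verbatim:

```python
import json

# Abridged from the `status --output=json --layout=cluster` stdout above.
cluster = json.loads('''{
  "Name": "pause-713000",
  "StatusCode": 418,
  "StatusName": "Paused",
  "Nodes": [{"Name": "pause-713000", "StatusCode": 200, "StatusName": "OK",
             "Components": {"apiserver": {"StatusCode": 418, "StatusName": "Paused"},
                            "kubelet": {"StatusCode": 405, "StatusName": "Stopped"}}}]
}''')

# The cluster as a whole reports Paused, which is why the status command
# exits with code 2 even though the node itself is OK.
assert cluster["StatusName"] == "Paused"
assert cluster["Nodes"][0]["Components"]["apiserver"]["StatusName"] == "Paused"
```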
--- PASS: TestPause/serial/VerifyStatus (0.16s)

TestPause/serial/Unpause (0.51s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-713000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

TestPause/serial/PauseAgain (0.62s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-713000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.62s)

TestPause/serial/DeletePaused (5.27s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-713000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-713000 --alsologtostderr -v=5: (5.26974461s)
--- PASS: TestPause/serial/DeletePaused (5.27s)

TestPause/serial/VerifyDeletedResources (0.22s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.22s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-182000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (122.620706ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.12s)

TestNetworkPlugins/group/kindnet/Start (72.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0127 20:10:39.092742    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m12.21497009s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.22s)

TestNetworkPlugins/group/calico/Start (71.36s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m11.363188735s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.36s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xbl5q" [fd0ce4e5-b46e-47db-a993-3964e67f294d] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012338582s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bjm86" [449e731d-d4fb-4d8c-b6fc-aae6faf01e2d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-bjm86" [449e731d-d4fb-4d8c-b6fc-aae6faf01e2d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.006123545s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.20s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5dkcg" [073b65cf-f968-4713-ae63-b2a0d2b2e2c8] Running
=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015303451s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/Start (67.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m7.628234022s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.63s)

TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

TestNetworkPlugins/group/calico/NetCatPod (15.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dzlzq" [7d22fdc8-7147-4c74-b920-1b4a6ec71526] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-dzlzq" [7d22fdc8-7147-4c74-b920-1b4a6ec71526] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.006138358s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.23s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (68.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0127 20:13:11.746093    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m8.844258788s)
--- PASS: TestNetworkPlugins/group/false/Start (68.84s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (19.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-f9lbf" [f051d44f-8815-4a9e-970b-2c017fb04a7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-f9lbf" [f051d44f-8815-4a9e-970b-2c017fb04a7c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 19.005336696s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (19.17s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/false/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.19s)

TestNetworkPlugins/group/false/NetCatPod (15.17s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7b72c" [4f5b9588-fbc5-4a8a-ad1f-a22947a73dec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/false/NetCatPod
helpers_test.go:344: "netcat-694fc96674-7b72c" [4f5b9588-fbc5-4a8a-ad1f-a22947a73dec] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 15.004416963s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (15.17s)

TestNetworkPlugins/group/enable-default-cni/Start (55.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (55.605144761s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.61s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (60.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m0.633140629s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.63s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.15s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-7dqrd" [91af127d-fd35-41e2-a1b1-00e5e3182e83] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-7dqrd" [91af127d-fd35-41e2-a1b1-00e5e3182e83] Running
E0127 20:15:11.395764    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.005822811s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (62.59s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m2.588445672s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.59s)

TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tfgg9" [e759f85e-516f-4773-b97d-d2f960cda230] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.011635966s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/flannel/NetCatPod (16.16s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-dsbqx" [0bd03d2a-1248-452f-86f9-01ddb5b6ba15] Pending
helpers_test.go:344: "netcat-694fc96674-dsbqx" [0bd03d2a-1248-452f-86f9-01ddb5b6ba15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-dsbqx" [0bd03d2a-1248-452f-86f9-01ddb5b6ba15] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.007065396s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.16s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/Start (54.9s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-035000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (54.897331704s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.90s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.15s)

TestNetworkPlugins/group/bridge/NetCatPod (15.16s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-h7lfr" [b03511ae-d8d5-4a14-bbbf-d5422a7bd96c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 20:16:41.764305    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:41.770353    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:41.781706    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:41.801779    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:41.907721    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:41.989076    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:42.149651    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:42.469770    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:43.110517    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:16:44.392207    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-h7lfr" [b03511ae-d8d5-4a14-bbbf-d5422a7bd96c] Running
E0127 20:16:46.952383    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.005614385s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.16s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.10s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/old-k8s-version/serial/FirstStart (140.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0
E0127 20:17:08.575052    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-159000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.16.0: (2m20.2266706s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.23s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-035000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kubenet/NetCatPod (15.17s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-035000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vvh9n" [8a1b21f3-f324-41c7-a743-9059a7994a4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0127 20:17:14.253655    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.259270    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.269312    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.289428    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.329771    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.409927    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.571943    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:14.892061    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:15.532383    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:16.813916    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:17:19.419963    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-vvh9n" [8a1b21f3-f324-41c7-a743-9059a7994a4b] Running
E0127 20:17:22.792333    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:17:24.540080    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.004302554s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.17s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-035000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-035000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)
E0127 20:28:11.747308    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 20:28:26.671415    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:29:02.074643    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:29:23.140385    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:29:28.490862    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:29:50.833188    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:29:56.186083    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:29:59.816675    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (99.76s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:17:47.070447    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 20:17:55.260062    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:18:03.751765    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:18:11.738425    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 20:18:26.662482    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:26.667901    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:26.678095    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:26.698317    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:26.738559    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:26.818692    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:27.033004    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:27.353079    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:27.994200    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:29.274407    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:31.835525    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:36.219611    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:18:36.977702    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:18:47.218603    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:19:02.066810    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.072240    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.083172    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.103601    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.145470    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.226940    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.389375    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:02.712001    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:03.355004    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:04.637551    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:07.200423    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:07.706602    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:19:12.324857    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:19:22.570215    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-272000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.26.1: (1m39.762463541s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (99.76s)

TestStartStop/group/no-preload/serial/DeployApp (8.23s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-272000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15552ca3-42b0-49d0-b20b-e32754f35978] Pending
helpers_test.go:344: "busybox" [15552ca3-42b0-49d0-b20b-e32754f35978] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0127 20:19:25.688363    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [15552ca3-42b0-49d0-b20b-e32754f35978] Running

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.010774789s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-272000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-159000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [41ab63ba-da69-4005-bcf7-2b863775d81a] Pending
helpers_test.go:344: "busybox" [41ab63ba-da69-4005-bcf7-2b863775d81a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:344: "busybox" [41ab63ba-da69-4005-bcf7-2b863775d81a] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.014451537s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-159000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-272000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-272000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/no-preload/serial/Stop (8.29s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-272000 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-272000 --alsologtostderr -v=3: (8.285867825s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-159000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-159000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.63s)

TestStartStop/group/old-k8s-version/serial/Stop (8.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-159000 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-159000 --alsologtostderr -v=3: (8.246174706s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-272000 -n no-preload-272000: exit status 7 (67.173235ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-272000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-159000 -n old-k8s-version-159000: exit status 7 (66.247993ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-159000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-071000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:21:34.459731    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:21:35.450260    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:35.455435    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:35.465662    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:35.486003    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:35.526449    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:35.606670    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:35.767448    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:36.087530    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:36.728234    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:38.008397    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:40.568481    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:41.779979    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:21:45.689799    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:21:45.936405    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:21:55.930281    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-071000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.1: (54.022246623s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.02s)

TestStartStop/group/newest-cni/serial/FirstStart (50.28s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:22:08.590939    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 20:22:09.530341    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:22:10.433565    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:10.439251    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:10.449728    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:10.470669    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:10.511206    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:10.591322    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:10.751986    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:11.073697    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:11.714287    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:12.995452    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:14.270589    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:22:15.557603    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:22:16.410357    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.1: (50.275186099s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.28s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-071000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4cef2af6-2405-484e-8849-2a9c1b310710] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4cef2af6-2405-484e-8849-2a9c1b310710] Running
E0127 20:22:20.677745    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014046934s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-071000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-071000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-071000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.60s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-071000 --alsologtostderr -v=3
E0127 20:22:30.919450    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-071000 --alsologtostderr -v=3: (8.273924705s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000: exit status 7 (67.099342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-071000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-071000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:22:41.997104    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:22:43.677472    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:22:47.087342    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory
E0127 20:22:51.399269    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-071000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.26.1: (5m0.614310757s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.78s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.66s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-620000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/newest-cni/serial/Stop (8.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-620000 --alsologtostderr -v=3
E0127 20:22:57.370997    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-620000 --alsologtostderr -v=3: (8.300063539s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000: exit status 7 (68.861918ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-620000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/newest-cni/serial/SecondStart (38.34s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:23:11.754721    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/functional-093000/client.crt: no such file or directory
E0127 20:23:19.456727    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:23:26.678698    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:23:32.358962    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-620000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.26.1: (38.175577641s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-620000 -n newest-cni-620000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.34s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-620000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/newest-cni/serial/Pause (1.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-620000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-620000 -n newest-cni-620000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-620000 -n newest-cni-620000: exit status 2 (164.169703ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-620000 -n newest-cni-620000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-620000 -n newest-cni-620000: exit status 2 (156.454588ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-620000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-620000 -n newest-cni-620000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-620000 -n newest-cni-620000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.84s)

TestStartStop/group/embed-certs/serial/FirstStart (54.39s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-950000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:23:54.436953    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/custom-flannel-035000/client.crt: no such file or directory
E0127 20:24:02.081579    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:24:19.290021    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:24:23.148191    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.154101    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.166347    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.188526    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.230085    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.310417    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.471820    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:23.793535    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:24.434556    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:25.714960    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:28.275024    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:28.499371    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:28.504737    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:28.516072    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:28.538215    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:28.579975    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:28.660909    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:28.821304    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:29.142316    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:29.773054    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/false-035000/client.crt: no such file or directory
E0127 20:24:29.782943    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:31.063250    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:33.396216    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:24:33.624405    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:24:38.746227    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-950000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.1: (54.392882002s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.39s)

TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-950000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [09701fed-c2c9-4ee7-a496-dbfeec5e93a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0127 20:24:43.636492    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [09701fed-c2c9-4ee7-a496-dbfeec5e93a6] Running
E0127 20:24:48.986910    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.013126488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-950000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.67s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-950000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-950000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/embed-certs/serial/Stop (8.3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-950000 --alsologtostderr -v=3
E0127 20:24:54.277608    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:24:59.822567    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-950000 --alsologtostderr -v=3: (8.295095109s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.30s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-950000 -n embed-certs-950000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-950000 -n embed-certs-950000: exit status 7 (66.538708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-950000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/SecondStart (298.14s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-950000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.1
E0127 20:25:04.116390    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:25:09.468024    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:25:11.404038    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
E0127 20:25:27.515019    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/enable-default-cni-035000/client.crt: no such file or directory
E0127 20:25:35.600448    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:25:45.075929    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:25:50.429268    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:26:03.293580    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/flannel-035000/client.crt: no such file or directory
E0127 20:26:35.443727    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:26:41.772701    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kindnet-035000/client.crt: no such file or directory
E0127 20:27:03.127143    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/bridge-035000/client.crt: no such file or directory
E0127 20:27:06.994393    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/no-preload-272000/client.crt: no such file or directory
E0127 20:27:08.584193    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/ingress-addon-legacy-294000/client.crt: no such file or directory
E0127 20:27:10.427047    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
E0127 20:27:12.349508    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/old-k8s-version-159000/client.crt: no such file or directory
E0127 20:27:14.263127    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/calico-035000/client.crt: no such file or directory
E0127 20:27:30.132522    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/addons-113000/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-950000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.26.1: (4m57.979652243s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-950000 -n embed-certs-950000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pvn96" [742b4201-6246-4fd6-8361-ad451893d9fb] Running
E0127 20:27:38.114278    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/kubenet-035000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010979857s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pvn96" [742b4201-6246-4fd6-8361-ad451893d9fb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008529084s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-071000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-071000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (1.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-071000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000: exit status 2 (164.665572ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000: exit status 2 (155.317964ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-071000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071000 -n default-k8s-diff-port-071000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.90s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fswfl" [97e509a1-875c-4671-a505-831132539d92] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008887855s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fswfl" [97e509a1-875c-4671-a505-831132539d92] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006042083s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-950000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-950000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.18s)

TestStartStop/group/embed-certs/serial/Pause (1.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-950000 --alsologtostderr -v=1
E0127 20:30:11.397986    4442 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3235/.minikube/profiles/skaffold-497000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-950000 -n embed-certs-950000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-950000 -n embed-certs-950000: exit status 2 (148.628605ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-950000 -n embed-certs-950000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-950000 -n embed-certs-950000: exit status 2 (173.089902ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-950000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-950000 -n embed-certs-950000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-950000 -n embed-certs-950000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.83s)

Test skip (18/298)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-035000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-035000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-035000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /etc/hosts:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /etc/resolv.conf:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-035000

>>> host: crictl pods:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: crictl containers:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> k8s: describe netcat deployment:
error: context "cilium-035000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-035000" does not exist

>>> k8s: netcat logs:
error: context "cilium-035000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-035000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-035000" does not exist

>>> k8s: coredns logs:
error: context "cilium-035000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-035000" does not exist

>>> k8s: api server logs:
error: context "cilium-035000" does not exist

>>> host: /etc/cni:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: ip a s:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: ip r s:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: iptables-save:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: iptables table nat:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-035000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-035000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-035000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-035000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-035000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-035000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-035000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-035000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-035000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-035000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-035000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: kubelet daemon config:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> k8s: kubelet logs:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-035000

>>> host: docker daemon status:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: docker daemon config:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: docker system info:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: cri-docker daemon status:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: cri-docker daemon config:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: cri-dockerd version:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

>>> host: containerd daemon status:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"
>>> host: containerd daemon config:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-035000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-035000"

                                                
                                                
----------------------- debugLogs end: cilium-035000 [took: 5.41212943s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-035000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-035000
--- SKIP: TestNetworkPlugins/group/cilium (5.81s)

TestStartStop/group/disable-driver-mounts (0.41s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-647000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.41s)