=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1967460272.exe start -p running-upgrade-288000 --memory=2200 --vm-driver=hyperkit
version_upgrade_test.go:133: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.1967460272.exe start -p running-upgrade-288000 --memory=2200 --vm-driver=hyperkit : (1m33.390487465s)
version_upgrade_test.go:143: (dbg) Run: out/minikube-darwin-amd64 start -p running-upgrade-288000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-288000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (14.83397876s)
-- stdout --
* [running-upgrade-288000] minikube v1.31.2 on Darwin 13.5.2
- MINIKUBE_LOCATION=17243
- KUBECONFIG=/Users/jenkins/minikube-integration/17243-979/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-979/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
* Using the hyperkit driver based on existing profile
* Starting control plane node running-upgrade-288000 in cluster running-upgrade-288000
* Updating the running hyperkit "running-upgrade-288000" VM ...
-- /stdout --
** stderr **
I0914 15:10:41.944317 5146 out.go:296] Setting OutFile to fd 1 ...
I0914 15:10:41.944552 5146 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:10:41.944557 5146 out.go:309] Setting ErrFile to fd 2...
I0914 15:10:41.944561 5146 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0914 15:10:41.944730 5146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17243-979/.minikube/bin
I0914 15:10:41.946168 5146 out.go:303] Setting JSON to false
I0914 15:10:41.965804 5146 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2409,"bootTime":1694727032,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0914 15:10:41.965892 5146 start.go:136] gopshost.Virtualization returned error: not implemented yet
I0914 15:10:41.987233 5146 out.go:177] * [running-upgrade-288000] minikube v1.31.2 on Darwin 13.5.2
I0914 15:10:42.031017 5146 out.go:177] - MINIKUBE_LOCATION=17243
I0914 15:10:42.031085 5146 notify.go:220] Checking for updates...
I0914 15:10:42.052352 5146 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17243-979/kubeconfig
I0914 15:10:42.074250 5146 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0914 15:10:42.094975 5146 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0914 15:10:42.137124 5146 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17243-979/.minikube
I0914 15:10:42.157950 5146 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0914 15:10:42.179803 5146 config.go:182] Loaded profile config "running-upgrade-288000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0914 15:10:42.179841 5146 start_flags.go:686] config upgrade: Driver=hyperkit
I0914 15:10:42.179855 5146 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503
I0914 15:10:42.179981 5146 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-979/.minikube/profiles/running-upgrade-288000/config.json ...
I0914 15:10:42.181307 5146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0914 15:10:42.181365 5146 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0914 15:10:42.188882 5146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52550
I0914 15:10:42.189248 5146 main.go:141] libmachine: () Calling .GetVersion
I0914 15:10:42.189690 5146 main.go:141] libmachine: Using API Version 1
I0914 15:10:42.189700 5146 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 15:10:42.189907 5146 main.go:141] libmachine: () Calling .GetMachineName
I0914 15:10:42.190013 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:42.211209 5146 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
I0914 15:10:42.232029 5146 driver.go:373] Setting default libvirt URI to qemu:///system
I0914 15:10:42.232577 5146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0914 15:10:42.232640 5146 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0914 15:10:42.240604 5146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52552
I0914 15:10:42.240934 5146 main.go:141] libmachine: () Calling .GetVersion
I0914 15:10:42.241285 5146 main.go:141] libmachine: Using API Version 1
I0914 15:10:42.241307 5146 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 15:10:42.241526 5146 main.go:141] libmachine: () Calling .GetMachineName
I0914 15:10:42.241640 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:42.290199 5146 out.go:177] * Using the hyperkit driver based on existing profile
I0914 15:10:42.326888 5146 start.go:298] selected driver: hyperkit
I0914 15:10:42.326908 5146 start.go:902] validating driver "hyperkit" against &{Name:running-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.25 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
I0914 15:10:42.327090 5146 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0914 15:10:42.330969 5146 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.331073 5146 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17243-979/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0914 15:10:42.337920 5146 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
I0914 15:10:42.341310 5146 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0914 15:10:42.341344 5146 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0914 15:10:42.341418 5146 cni.go:84] Creating CNI manager for ""
I0914 15:10:42.341439 5146 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0914 15:10:42.341448 5146 start_flags.go:321] config:
{Name:running-upgrade-288000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694625416-17243@sha256:87a683cf6721050a43e629eceb07cbff2775f9ca392344a264b61b7da435e503 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.25 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
I0914 15:10:42.341646 5146 iso.go:125] acquiring lock: {Name:mkb0b7254efe5d6c8057c6ee6e666676be69af44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.384124 5146 out.go:177] * Starting control plane node running-upgrade-288000 in cluster running-upgrade-288000
I0914 15:10:42.404924 5146 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
W0914 15:10:42.521664 5146 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0914 15:10:42.521799 5146 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17243-979/.minikube/profiles/running-upgrade-288000/config.json ...
I0914 15:10:42.521948 5146 cache.go:107] acquiring lock: {Name:mk12572a23b9196edce0e8b1d96d62d2a08cfbae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.521950 5146 cache.go:107] acquiring lock: {Name:mkae0dd66516273123af979427e829d7296b8804 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.521993 5146 cache.go:107] acquiring lock: {Name:mka0e7703311d29657d2c0077799fe70c61066dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.522145 5146 cache.go:115] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0914 15:10:42.522177 5146 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 223.875µs
I0914 15:10:42.522205 5146 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0914 15:10:42.522201 5146 cache.go:107] acquiring lock: {Name:mk169738f484b26dc6e4a2784fe009c530494eb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.522283 5146 cache.go:107] acquiring lock: {Name:mk8320e5caea151c7931462229c6d1c15905eb13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.522304 5146 cache.go:107] acquiring lock: {Name:mk5ea53c37c437dc942b9148c9872f08108150c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.522367 5146 cache.go:107] acquiring lock: {Name:mk72e477000dcf2cbc3e3bfa0eea98c64113e19b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.522402 5146 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
I0914 15:10:42.522382 5146 cache.go:107] acquiring lock: {Name:mk3b26d0b5de11f24fcc5b9bc764430a7efaa372 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0914 15:10:42.522486 5146 image.go:134] retrieving image: registry.k8s.io/pause:3.1
I0914 15:10:42.522503 5146 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
I0914 15:10:42.522739 5146 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
I0914 15:10:42.522853 5146 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0914 15:10:42.522890 5146 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
I0914 15:10:42.522905 5146 start.go:365] acquiring machines lock for running-upgrade-288000: {Name:mk50fc030ea3f9c3d1679ad2d232a5102cf783c1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0914 15:10:42.523014 5146 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
I0914 15:10:42.523031 5146 start.go:369] acquired machines lock for "running-upgrade-288000" in 102.862µs
I0914 15:10:42.523088 5146 start.go:96] Skipping create...Using existing machine configuration
I0914 15:10:42.523110 5146 fix.go:54] fixHost starting: minikube
I0914 15:10:42.523733 5146 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0914 15:10:42.523772 5146 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0914 15:10:42.529780 5146 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
I0914 15:10:42.529803 5146 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
I0914 15:10:42.529809 5146 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
I0914 15:10:42.529784 5146 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0914 15:10:42.530822 5146 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
I0914 15:10:42.531092 5146 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
I0914 15:10:42.531187 5146 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
I0914 15:10:42.534061 5146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52554
I0914 15:10:42.534393 5146 main.go:141] libmachine: () Calling .GetVersion
I0914 15:10:42.534756 5146 main.go:141] libmachine: Using API Version 1
I0914 15:10:42.534774 5146 main.go:141] libmachine: () Calling .SetConfigRaw
I0914 15:10:42.535039 5146 main.go:141] libmachine: () Calling .GetMachineName
I0914 15:10:42.535169 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:42.535268 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetState
I0914 15:10:42.535364 5146 main.go:141] libmachine: (running-upgrade-288000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0914 15:10:42.535447 5146 main.go:141] libmachine: (running-upgrade-288000) DBG | hyperkit pid from json: 5044
I0914 15:10:42.536358 5146 fix.go:102] recreateIfNeeded on running-upgrade-288000: state=Running err=<nil>
W0914 15:10:42.536380 5146 fix.go:128] unexpected machine state, will restart: <nil>
I0914 15:10:42.578674 5146 out.go:177] * Updating the running hyperkit "running-upgrade-288000" VM ...
I0914 15:10:42.599521 5146 machine.go:88] provisioning docker machine ...
I0914 15:10:42.599544 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:42.599769 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetMachineName
I0914 15:10:42.599890 5146 buildroot.go:166] provisioning hostname "running-upgrade-288000"
I0914 15:10:42.599901 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetMachineName
I0914 15:10:42.600002 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:42.600100 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:42.600201 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.600306 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.600390 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:42.600518 5146 main.go:141] libmachine: Using SSH client type: native
I0914 15:10:42.600824 5146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0914 15:10:42.600833 5146 main.go:141] libmachine: About to run SSH command:
sudo hostname running-upgrade-288000 && echo "running-upgrade-288000" | sudo tee /etc/hostname
I0914 15:10:42.658945 5146 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-288000
I0914 15:10:42.658964 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:42.659118 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:42.659220 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.659326 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.659428 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:42.659551 5146 main.go:141] libmachine: Using SSH client type: native
I0914 15:10:42.659803 5146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0914 15:10:42.659815 5146 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-288000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-288000/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-288000' | sudo tee -a /etc/hosts;
fi
fi
I0914 15:10:42.712492 5146 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0914 15:10:42.712521 5146 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17243-979/.minikube CaCertPath:/Users/jenkins/minikube-integration/17243-979/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17243-979/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17243-979/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17243-979/.minikube}
I0914 15:10:42.712547 5146 buildroot.go:174] setting up certificates
I0914 15:10:42.712571 5146 provision.go:83] configureAuth start
I0914 15:10:42.712581 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetMachineName
I0914 15:10:42.712745 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetIP
I0914 15:10:42.712859 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:42.712984 5146 provision.go:138] copyHostCerts
I0914 15:10:42.713061 5146 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-979/.minikube/key.pem, removing ...
I0914 15:10:42.713072 5146 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-979/.minikube/key.pem
I0914 15:10:42.713197 5146 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-979/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17243-979/.minikube/key.pem (1679 bytes)
I0914 15:10:42.713412 5146 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-979/.minikube/ca.pem, removing ...
I0914 15:10:42.713418 5146 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-979/.minikube/ca.pem
I0914 15:10:42.713492 5146 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-979/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17243-979/.minikube/ca.pem (1078 bytes)
I0914 15:10:42.713677 5146 exec_runner.go:144] found /Users/jenkins/minikube-integration/17243-979/.minikube/cert.pem, removing ...
I0914 15:10:42.713684 5146 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17243-979/.minikube/cert.pem
I0914 15:10:42.713762 5146 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17243-979/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17243-979/.minikube/cert.pem (1123 bytes)
I0914 15:10:42.713908 5146 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17243-979/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17243-979/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17243-979/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-288000 san=[192.168.64.25 192.168.64.25 localhost 127.0.0.1 minikube running-upgrade-288000]
I0914 15:10:42.828195 5146 provision.go:172] copyRemoteCerts
I0914 15:10:42.828262 5146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0914 15:10:42.828286 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:42.828468 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:42.828568 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.828671 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:42.828751 5146 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/machines/running-upgrade-288000/id_rsa Username:docker}
I0914 15:10:42.858673 5146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-979/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0914 15:10:42.868512 5146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-979/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0914 15:10:42.878363 5146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-979/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0914 15:10:42.888574 5146 provision.go:86] duration metric: configureAuth took 175.995232ms
I0914 15:10:42.888588 5146 buildroot.go:189] setting minikube options for container-runtime
I0914 15:10:42.888705 5146 config.go:182] Loaded profile config "running-upgrade-288000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0914 15:10:42.888719 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:42.888860 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:42.888949 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:42.889042 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.889138 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.889218 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:42.889326 5146 main.go:141] libmachine: Using SSH client type: native
I0914 15:10:42.889572 5146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0914 15:10:42.889581 5146 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0914 15:10:42.946211 5146 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0914 15:10:42.946229 5146 buildroot.go:70] root file system type: tmpfs
I0914 15:10:42.946310 5146 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0914 15:10:42.946329 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:42.946464 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:42.946556 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.946684 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:42.946787 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:42.946913 5146 main.go:141] libmachine: Using SSH client type: native
I0914 15:10:42.947159 5146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0914 15:10:42.947210 5146 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0914 15:10:43.006397 5146 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0914 15:10:43.006439 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:43.006584 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:43.006665 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:43.006792 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:43.006883 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:43.007000 5146 main.go:141] libmachine: Using SSH client type: native
I0914 15:10:43.007244 5146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0914 15:10:43.007257 5146 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0914 15:10:43.158502 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
I0914 15:10:43.290175 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
I0914 15:10:43.587835 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
I0914 15:10:43.955661 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0914 15:10:44.213631 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
I0914 15:10:44.496756 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
I0914 15:10:44.786125 5146 cache.go:162] opening: /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
I0914 15:10:44.866153 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
I0914 15:10:44.866170 5146 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 2.343955179s
I0914 15:10:44.866179 5146 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
I0914 15:10:44.931910 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
I0914 15:10:44.931930 5146 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 2.409847426s
I0914 15:10:44.931942 5146 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
I0914 15:10:48.372056 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
I0914 15:10:48.372074 5146 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 5.850006324s
I0914 15:10:48.372083 5146 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
I0914 15:10:49.451508 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
I0914 15:10:49.451532 5146 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 6.929768099s
I0914 15:10:49.451548 5146 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
I0914 15:10:49.658282 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
I0914 15:10:49.658297 5146 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 7.136176484s
I0914 15:10:49.658305 5146 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
I0914 15:10:52.383884 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
I0914 15:10:52.383901 5146 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 9.862177291s
I0914 15:10:52.383909 5146 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
I0914 15:10:54.378822 5146 cache.go:157] /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
I0914 15:10:54.378838 5146 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 11.856911479s
I0914 15:10:54.378846 5146 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/17243-979/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
I0914 15:10:54.378879 5146 cache.go:87] Successfully saved all images to host disk.
I0914 15:10:54.841142 5146 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
+++ /lib/systemd/system/docker.service.new
@@ -3,9 +3,12 @@
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
+Restart=on-failure
@@ -21,7 +24,7 @@
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0914 15:10:54.841162 5146 machine.go:91] provisioned docker machine in 12.241928073s
I0914 15:10:54.841174 5146 start.go:300] post-start starting for "running-upgrade-288000" (driver="hyperkit")
I0914 15:10:54.841182 5146 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0914 15:10:54.841198 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:54.841397 5146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0914 15:10:54.841415 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:54.841504 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:54.841598 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:54.841674 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:54.841754 5146 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/machines/running-upgrade-288000/id_rsa Username:docker}
I0914 15:10:54.872999 5146 ssh_runner.go:195] Run: cat /etc/os-release
I0914 15:10:54.875615 5146 info.go:137] Remote host: Buildroot 2019.02.7
I0914 15:10:54.875629 5146 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-979/.minikube/addons for local assets ...
I0914 15:10:54.875716 5146 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17243-979/.minikube/files for local assets ...
I0914 15:10:54.875889 5146 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17243-979/.minikube/files/etc/ssl/certs/14402.pem -> 14402.pem in /etc/ssl/certs
I0914 15:10:54.876072 5146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0914 15:10:54.879941 5146 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17243-979/.minikube/files/etc/ssl/certs/14402.pem --> /etc/ssl/certs/14402.pem (1708 bytes)
I0914 15:10:54.889127 5146 start.go:303] post-start completed in 47.947137ms
I0914 15:10:54.889139 5146 fix.go:56] fixHost completed within 12.366341886s
I0914 15:10:54.889154 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:54.889290 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:54.889393 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:54.889481 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:54.889565 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:54.889691 5146 main.go:141] libmachine: Using SSH client type: native
I0914 15:10:54.889929 5146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f2920] 0x13f5600 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0914 15:10:54.889937 5146 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0914 15:10:54.943249 5146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694729455.169288449
I0914 15:10:54.943265 5146 fix.go:206] guest clock: 1694729455.169288449
I0914 15:10:54.943270 5146 fix.go:219] Guest: 2023-09-14 15:10:55.169288449 -0700 PDT Remote: 2023-09-14 15:10:54.889143 -0700 PDT m=+12.977178028 (delta=280.145449ms)
I0914 15:10:54.943293 5146 fix.go:190] guest clock delta is within tolerance: 280.145449ms
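The `fix.go` lines above compare the guest's `date +%s.%N` output against the host clock and accept the result because the 280ms delta is within tolerance. The arithmetic can be sketched like this, using the two timestamps from this log (the 2-second tolerance is illustrative, not minikube's actual constant):

```shell
# Sketch of the guest-clock skew check, with the timestamps from the log.
guest=1694729455.169288449   # `date +%s.%N` as reported by the VM
host=1694729454.889143000    # host time when the SSH command returned

awk -v g="$guest" -v h="$host" 'BEGIN {
  d = g - h; if (d < 0) d = -d      # absolute skew in seconds
  printf "delta=%.6fs\n", d         # → delta=0.280145s
  exit (d < 2.0) ? 0 : 1            # within tolerance -> exit 0
}'
```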
I0914 15:10:54.943298 5146 start.go:83] releasing machines lock for "running-upgrade-288000", held for 12.420557838s
I0914 15:10:54.943319 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:54.943447 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetIP
I0914 15:10:54.943537 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:54.943838 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:54.943941 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .DriverName
I0914 15:10:54.944024 5146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0914 15:10:54.944056 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:54.944070 5146 ssh_runner.go:195] Run: cat /version.json
I0914 15:10:54.944082 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHHostname
I0914 15:10:54.944165 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:54.944169 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHPort
I0914 15:10:54.944279 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:54.944319 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHKeyPath
I0914 15:10:54.944377 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:54.944417 5146 main.go:141] libmachine: (running-upgrade-288000) Calling .GetSSHUsername
I0914 15:10:54.944496 5146 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/machines/running-upgrade-288000/id_rsa Username:docker}
I0914 15:10:54.944510 5146 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17243-979/.minikube/machines/running-upgrade-288000/id_rsa Username:docker}
W0914 15:10:55.022060 5146 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
stdout:
stderr:
cat: /version.json: No such file or directory
I0914 15:10:55.022141 5146 ssh_runner.go:195] Run: systemctl --version
I0914 15:10:55.025487 5146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0914 15:10:55.029237 5146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0914 15:10:55.029286 5146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0914 15:10:55.032887 5146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0914 15:10:55.036276 5146 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
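The two `find ... -exec sed` commands above rewrite any existing `"subnet"`/`"gateway"` values in bridge and podman CNI configs to the cluster pod CIDR. Here no configs were found, but the rewrite itself can be demonstrated on a sample file (the file contents and temp path are illustrative):

```shell
# Sketch of the CNI subnet/gateway rewrite, applied to a sample bridge
# config in a temp file (contents are illustrative).
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "type": "bridge",
  "ipam": {
    "subnet": "172.17.0.0/16",
    "gateway": "172.17.0.1"
  }
}
EOF

# Same substitutions as the log's sed invocations: force the pod CIDR.
sed -i -r \
  -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' \
  -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' \
  "$conf"

grep '"subnet"' "$conf"   # the line now carries 10.244.0.0/16
```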
I0914 15:10:55.036292 5146 start.go:469] detecting cgroup driver to use...
I0914 15:10:55.036421 5146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0914 15:10:55.044297 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
I0914 15:10:55.048365 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0914 15:10:55.052300 5146 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0914 15:10:55.052340 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0914 15:10:55.056472 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 15:10:55.060453 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0914 15:10:55.064450 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0914 15:10:55.068406 5146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0914 15:10:55.072834 5146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
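The containerd reconfiguration above is a series of in-place `sed` edits on `/etc/containerd/config.toml`: pin `SystemdCgroup = false` (matching the "cgroupfs" driver choice) and migrate the legacy v1 runtime name to `runc.v2`. A self-contained sketch of the two key edits on a sample file (sample content is illustrative):

```shell
# Sketch of the config.toml edits: cgroupfs driver + runc v2 migration,
# applied to a sample file (illustrative content).
toml=$(mktemp)
cat > "$toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  SystemdCgroup = true
EOF

# Preserve leading indentation while forcing SystemdCgroup off.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$toml"
# Swap the legacy v1 runtime shim name for runc v2.
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$toml"

cat "$toml"
```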
I0914 15:10:55.076803 5146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0914 15:10:55.080274 5146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0914 15:10:55.083772 5146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 15:10:55.142658 5146 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0914 15:10:55.152267 5146 start.go:469] detecting cgroup driver to use...
I0914 15:10:55.152337 5146 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0914 15:10:55.163691 5146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 15:10:55.171258 5146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0914 15:10:55.191106 5146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0914 15:10:55.198352 5146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0914 15:10:55.205339 5146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0914 15:10:55.212851 5146 ssh_runner.go:195] Run: which cri-dockerd
I0914 15:10:55.215025 5146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0914 15:10:55.219029 5146 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0914 15:10:55.225281 5146 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0914 15:10:55.283795 5146 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0914 15:10:55.353946 5146 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
I0914 15:10:55.353963 5146 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0914 15:10:55.360802 5146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0914 15:10:55.425275 5146 ssh_runner.go:195] Run: sudo systemctl restart docker
I0914 15:10:56.588399 5146 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163132512s)
I0914 15:10:56.609767 5146 out.go:177]
W0914 15:10:56.630458 5146 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
W0914 15:10:56.630476 5146 out.go:239] *
W0914 15:10:56.631338 5146 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0914 15:10:56.714586 5146 out.go:177]
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-288000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-14 15:10:56.748018 -0700 PDT m=+2115.105751172
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-288000 -n running-upgrade-288000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-288000 -n running-upgrade-288000: exit status 6 (121.685157ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0914 15:10:56.855653 5266 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-288000" does not appear in /Users/jenkins/minikube-integration/17243-979/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-288000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-288000" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-darwin-amd64 delete -p running-upgrade-288000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-288000: (1.451487141s)
--- FAIL: TestRunningBinaryUpgrade (111.94s)