=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1875133465.exe start -p running-upgrade-961000 --memory=2200 --vm-driver=hyperkit
version_upgrade_test.go:133: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.6.2.1875133465.exe start -p running-upgrade-961000 --memory=2200 --vm-driver=hyperkit : (1m30.002697629s)
version_upgrade_test.go:143: (dbg) Run: out/minikube-darwin-amd64 start -p running-upgrade-961000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-961000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (15.353090638s)
-- stdout --
* [running-upgrade-961000] minikube v1.31.2 on Darwin 14.0
- MINIKUBE_LOCATION=17491
- KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
* Using the hyperkit driver based on existing profile
* Starting control plane node running-upgrade-961000 in cluster running-upgrade-961000
* Updating the running hyperkit "running-upgrade-961000" VM ...
-- /stdout --
** stderr **
I1025 19:21:10.599491 81271 out.go:296] Setting OutFile to fd 1 ...
I1025 19:21:10.599780 81271 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 19:21:10.599786 81271 out.go:309] Setting ErrFile to fd 2...
I1025 19:21:10.599790 81271 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 19:21:10.599964 81271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17491-76819/.minikube/bin
I1025 19:21:10.601499 81271 out.go:303] Setting JSON to false
I1025 19:21:10.625745 81271 start.go:128] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":37238,"bootTime":1698249632,"procs":500,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.0","kernelVersion":"23.0.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W1025 19:21:10.625849 81271 start.go:136] gopshost.Virtualization returned error: not implemented yet
I1025 19:21:10.647927 81271 out.go:177] * [running-upgrade-961000] minikube v1.31.2 on Darwin 14.0
I1025 19:21:10.743469 81271 out.go:177] - MINIKUBE_LOCATION=17491
I1025 19:21:10.722622 81271 notify.go:220] Checking for updates...
I1025 19:21:10.801327 81271 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/17491-76819/kubeconfig
I1025 19:21:10.859227 81271 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I1025 19:21:10.880367 81271 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1025 19:21:10.901415 81271 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17491-76819/.minikube
I1025 19:21:10.922387 81271 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1025 19:21:10.944001 81271 config.go:182] Loaded profile config "running-upgrade-961000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I1025 19:21:10.944037 81271 start_flags.go:697] config upgrade: Driver=hyperkit
I1025 19:21:10.944050 81271 start_flags.go:709] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
I1025 19:21:10.944165 81271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/running-upgrade-961000/config.json ...
I1025 19:21:10.945341 81271 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 19:21:10.945407 81271 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 19:21:10.954405 81271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53945
I1025 19:21:10.954773 81271 main.go:141] libmachine: () Calling .GetVersion
I1025 19:21:10.955229 81271 main.go:141] libmachine: Using API Version 1
I1025 19:21:10.955256 81271 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 19:21:10.955504 81271 main.go:141] libmachine: () Calling .GetMachineName
I1025 19:21:10.955606 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:10.976345 81271 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
I1025 19:21:10.997262 81271 driver.go:378] Setting default libvirt URI to qemu:///system
I1025 19:21:10.997704 81271 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 19:21:10.997748 81271 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 19:21:11.006994 81271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53947
I1025 19:21:11.007353 81271 main.go:141] libmachine: () Calling .GetVersion
I1025 19:21:11.007726 81271 main.go:141] libmachine: Using API Version 1
I1025 19:21:11.007744 81271 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 19:21:11.007950 81271 main.go:141] libmachine: () Calling .GetMachineName
I1025 19:21:11.008057 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:11.057537 81271 out.go:177] * Using the hyperkit driver based on existing profile
I1025 19:21:11.078171 81271 start.go:298] selected driver: hyperkit
I1025 19:21:11.078187 81271 start.go:902] validating driver "hyperkit" against &{Name:running-upgrade-961000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.87.11 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
I1025 19:21:11.078306 81271 start.go:913] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1025 19:21:11.082210 81271 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.082308 81271 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/17491-76819/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I1025 19:21:11.090056 81271 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.31.2
I1025 19:21:11.094359 81271 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 19:21:11.094383 81271 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I1025 19:21:11.094465 81271 cni.go:84] Creating CNI manager for ""
I1025 19:21:11.094486 81271 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I1025 19:21:11.094496 81271 start_flags.go:323] config:
{Name:running-upgrade-961000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.87.11 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
I1025 19:21:11.094672 81271 iso.go:125] acquiring lock: {Name:mk28dd82d77e5b41d6d5779f6c9eefa1a75d61e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.136341 81271 out.go:177] * Starting control plane node running-upgrade-961000 in cluster running-upgrade-961000
I1025 19:21:11.157216 81271 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
W1025 19:21:11.213550 81271 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
I1025 19:21:11.213661 81271 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17491-76819/.minikube/profiles/running-upgrade-961000/config.json ...
I1025 19:21:11.213744 81271 cache.go:107] acquiring lock: {Name:mked931b330050a138a73435356c58e13649ef3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213776 81271 cache.go:107] acquiring lock: {Name:mkb29a8422b0fd02310979164accd7236a712951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213769 81271 cache.go:107] acquiring lock: {Name:mk8ef1082aad9c42eb262d52ed78efab5e04fccf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213896 81271 cache.go:115] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1025 19:21:11.213887 81271 cache.go:107] acquiring lock: {Name:mk7eac97b2594b28bb1c298d5deace21a4190401 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213927 81271 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 187.269µs
I1025 19:21:11.213943 81271 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1025 19:21:11.213917 81271 cache.go:107] acquiring lock: {Name:mk31d0ad85400c98cc989d80a128b16f522dca3e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213959 81271 cache.go:107] acquiring lock: {Name:mk067a7af34b5cb1550dd1232822d08d70606ef5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213993 81271 cache.go:107] acquiring lock: {Name:mk49dbc2dc0236a392c1b9dfe260b782a4c19376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.213980 81271 cache.go:107] acquiring lock: {Name:mk93ff27cdba963c9a558d35e6eaabbe5d08abbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 19:21:11.214129 81271 image.go:134] retrieving image: registry.k8s.io/pause:3.1
I1025 19:21:11.214130 81271 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
I1025 19:21:11.214352 81271 start.go:365] acquiring machines lock for running-upgrade-961000: {Name:mk32146e6cf5387e84f7f533a58800680d6b59cf Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1025 19:21:11.214433 81271 start.go:369] acquired machines lock for "running-upgrade-961000" in 64.965µs
I1025 19:21:11.214457 81271 start.go:96] Skipping create...Using existing machine configuration
I1025 19:21:11.214468 81271 fix.go:54] fixHost starting: minikube
I1025 19:21:11.214691 81271 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
I1025 19:21:11.214766 81271 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I1025 19:21:11.214901 81271 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
I1025 19:21:11.214917 81271 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I1025 19:21:11.214922 81271 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
I1025 19:21:11.214959 81271 main.go:141] libmachine: Launching plugin server for driver hyperkit
I1025 19:21:11.215002 81271 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
I1025 19:21:11.222930 81271 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
I1025 19:21:11.223132 81271 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
I1025 19:21:11.223247 81271 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I1025 19:21:11.223301 81271 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
I1025 19:21:11.224302 81271 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
I1025 19:21:11.224421 81271 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
I1025 19:21:11.224520 81271 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
I1025 19:21:11.227356 81271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53949
I1025 19:21:11.227703 81271 main.go:141] libmachine: () Calling .GetVersion
I1025 19:21:11.228079 81271 main.go:141] libmachine: Using API Version 1
I1025 19:21:11.228090 81271 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 19:21:11.228295 81271 main.go:141] libmachine: () Calling .GetMachineName
I1025 19:21:11.228424 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:11.228528 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetState
I1025 19:21:11.228627 81271 main.go:141] libmachine: (running-upgrade-961000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I1025 19:21:11.228692 81271 main.go:141] libmachine: (running-upgrade-961000) DBG | hyperkit pid from json: 81168
I1025 19:21:11.229840 81271 fix.go:102] recreateIfNeeded on running-upgrade-961000: state=Running err=<nil>
W1025 19:21:11.229856 81271 fix.go:128] unexpected machine state, will restart: <nil>
I1025 19:21:11.271845 81271 out.go:177] * Updating the running hyperkit "running-upgrade-961000" VM ...
I1025 19:21:11.292777 81271 machine.go:88] provisioning docker machine ...
I1025 19:21:11.292795 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:11.292958 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetMachineName
I1025 19:21:11.293062 81271 buildroot.go:166] provisioning hostname "running-upgrade-961000"
I1025 19:21:11.293077 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetMachineName
I1025 19:21:11.293180 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.293260 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:11.293348 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.293427 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.293498 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:11.293590 81271 main.go:141] libmachine: Using SSH client type: native
I1025 19:21:11.294056 81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil> [] 0s} 192.168.87.11 22 <nil> <nil>}
I1025 19:21:11.294065 81271 main.go:141] libmachine: About to run SSH command:
sudo hostname running-upgrade-961000 && echo "running-upgrade-961000" | sudo tee /etc/hostname
I1025 19:21:11.375021 81271 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-961000
I1025 19:21:11.375044 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.375182 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:11.375278 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.375372 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.375475 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:11.375610 81271 main.go:141] libmachine: Using SSH client type: native
I1025 19:21:11.375851 81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil> [] 0s} 192.168.87.11 22 <nil> <nil>}
I1025 19:21:11.375866 81271 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-961000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-961000/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-961000' | sudo tee -a /etc/hosts;
fi
fi
I1025 19:21:11.452042 81271 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1025 19:21:11.452075 81271 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17491-76819/.minikube CaCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17491-76819/.minikube}
I1025 19:21:11.452097 81271 buildroot.go:174] setting up certificates
I1025 19:21:11.452112 81271 provision.go:83] configureAuth start
I1025 19:21:11.452120 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetMachineName
I1025 19:21:11.452266 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetIP
I1025 19:21:11.452369 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.452472 81271 provision.go:138] copyHostCerts
I1025 19:21:11.452540 81271 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem, removing ...
I1025 19:21:11.452549 81271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem
I1025 19:21:11.452674 81271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/ca.pem (1082 bytes)
I1025 19:21:11.452895 81271 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem, removing ...
I1025 19:21:11.452901 81271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem
I1025 19:21:11.452974 81271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/cert.pem (1123 bytes)
I1025 19:21:11.453149 81271 exec_runner.go:144] found /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem, removing ...
I1025 19:21:11.453155 81271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem
I1025 19:21:11.453232 81271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17491-76819/.minikube/key.pem (1679 bytes)
I1025 19:21:11.453381 81271 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-961000 san=[192.168.87.11 192.168.87.11 localhost 127.0.0.1 minikube running-upgrade-961000]
I1025 19:21:11.611480 81271 provision.go:172] copyRemoteCerts
I1025 19:21:11.611535 81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1025 19:21:11.611571 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.611728 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:11.611813 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.611894 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:11.611978 81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
I1025 19:21:11.654645 81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1025 19:21:11.664752 81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I1025 19:21:11.674218 81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1025 19:21:11.684536 81271 provision.go:86] duration metric: configureAuth took 232.41789ms
I1025 19:21:11.684549 81271 buildroot.go:189] setting minikube options for container-runtime
I1025 19:21:11.684672 81271 config.go:182] Loaded profile config "running-upgrade-961000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I1025 19:21:11.684705 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:11.684845 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.684948 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:11.685056 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.685146 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.685234 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:11.685372 81271 main.go:141] libmachine: Using SSH client type: native
I1025 19:21:11.685606 81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil> [] 0s} 192.168.87.11 22 <nil> <nil>}
I1025 19:21:11.685614 81271 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1025 19:21:11.762664 81271 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I1025 19:21:11.762687 81271 buildroot.go:70] root file system type: tmpfs
I1025 19:21:11.762759 81271 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1025 19:21:11.762779 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.762929 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:11.763019 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.763110 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.763187 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:11.763298 81271 main.go:141] libmachine: Using SSH client type: native
I1025 19:21:11.763537 81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil> [] 0s} 192.168.87.11 22 <nil> <nil>}
I1025 19:21:11.763586 81271 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1025 19:21:11.847526 81271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1025 19:21:11.847557 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:11.847707 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:11.847794 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.847893 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:11.847984 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:11.848122 81271 main.go:141] libmachine: Using SSH client type: native
I1025 19:21:11.848371 81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil> [] 0s} 192.168.87.11 22 <nil> <nil>}
I1025 19:21:11.848385 81271 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1025 19:21:11.956672 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
I1025 19:21:12.123838 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
I1025 19:21:12.456071 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I1025 19:21:12.778774 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
I1025 19:21:13.100634 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
I1025 19:21:13.424089 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
I1025 19:21:13.711392 81271 cache.go:162] opening: /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
I1025 19:21:13.828328 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
I1025 19:21:13.828346 81271 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 2.614572919s
I1025 19:21:13.828357 81271 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
I1025 19:21:14.691715 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
I1025 19:21:14.691733 81271 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 3.477901917s
I1025 19:21:14.691742 81271 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
I1025 19:21:17.379825 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
I1025 19:21:17.379841 81271 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 6.166096182s
I1025 19:21:17.379849 81271 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
I1025 19:21:18.956815 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
I1025 19:21:18.956844 81271 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 7.743149244s
I1025 19:21:18.956855 81271 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
I1025 19:21:19.466126 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
I1025 19:21:19.466145 81271 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 8.252619502s
I1025 19:21:19.466154 81271 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
I1025 19:21:19.859265 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
I1025 19:21:19.859280 81271 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 8.645763898s
I1025 19:21:19.859288 81271 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
I1025 19:21:23.614904 81271 cache.go:157] /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
I1025 19:21:23.614926 81271 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 12.401413299s
I1025 19:21:23.614935 81271 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/17491-76819/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
I1025 19:21:23.614950 81271 cache.go:87] Successfully saved all images to host disk.
I1025 19:21:23.896255 81271 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
+++ /lib/systemd/system/docker.service.new
@@ -3,9 +3,12 @@
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
+Restart=on-failure
@@ -21,7 +24,7 @@
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I1025 19:21:23.896274 81271 machine.go:91] provisioned docker machine in 12.603812152s
I1025 19:21:23.896281 81271 start.go:300] post-start starting for "running-upgrade-961000" (driver="hyperkit")
I1025 19:21:23.896290 81271 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1025 19:21:23.896301 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:23.896530 81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1025 19:21:23.896544 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:23.896644 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:23.896733 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:23.896816 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:23.896893 81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
I1025 19:21:23.941345 81271 ssh_runner.go:195] Run: cat /etc/os-release
I1025 19:21:23.943962 81271 info.go:137] Remote host: Buildroot 2019.02.7
I1025 19:21:23.943973 81271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17491-76819/.minikube/addons for local assets ...
I1025 19:21:23.944056 81271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17491-76819/.minikube/files for local assets ...
I1025 19:21:23.944228 81271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem -> 772902.pem in /etc/ssl/certs
I1025 19:21:23.944411 81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1025 19:21:23.948442 81271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17491-76819/.minikube/files/etc/ssl/certs/772902.pem --> /etc/ssl/certs/772902.pem (1708 bytes)
I1025 19:21:23.957444 81271 start.go:303] post-start completed in 61.157111ms
I1025 19:21:23.957457 81271 fix.go:56] fixHost completed within 12.743323675s
I1025 19:21:23.957474 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:23.957600 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:23.957690 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:23.957786 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:23.957875 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:23.957996 81271 main.go:141] libmachine: Using SSH client type: native
I1025 19:21:23.958244 81271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13f54a0] 0x13f8180 <nil> [] 0s} 192.168.87.11 22 <nil> <nil>}
I1025 19:21:23.958252 81271 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1025 19:21:24.034169 81271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698286884.254723586
I1025 19:21:24.034184 81271 fix.go:206] guest clock: 1698286884.254723586
I1025 19:21:24.034190 81271 fix.go:219] Guest: 2023-10-25 19:21:24.254723586 -0700 PDT Remote: 2023-10-25 19:21:23.957463 -0700 PDT m=+13.402640461 (delta=297.260586ms)
I1025 19:21:24.034206 81271 fix.go:190] guest clock delta is within tolerance: 297.260586ms
I1025 19:21:24.034210 81271 start.go:83] releasing machines lock for "running-upgrade-961000", held for 12.820101842s
I1025 19:21:24.034225 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:24.034359 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetIP
I1025 19:21:24.034454 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:24.034758 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:24.034858 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .DriverName
I1025 19:21:24.034917 81271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1025 19:21:24.034951 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:24.035008 81271 ssh_runner.go:195] Run: cat /version.json
I1025 19:21:24.035022 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHHostname
I1025 19:21:24.035038 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:24.035151 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHPort
I1025 19:21:24.035165 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:24.035257 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHKeyPath
I1025 19:21:24.035270 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:24.035347 81271 main.go:141] libmachine: (running-upgrade-961000) Calling .GetSSHUsername
I1025 19:21:24.035360 81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
I1025 19:21:24.035444 81271 sshutil.go:53] new ssh client: &{IP:192.168.87.11 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17491-76819/.minikube/machines/running-upgrade-961000/id_rsa Username:docker}
W1025 19:21:24.124825 81271 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
stdout:
stderr:
cat: /version.json: No such file or directory
I1025 19:21:24.124902 81271 ssh_runner.go:195] Run: systemctl --version
I1025 19:21:24.130195 81271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1025 19:21:24.133765 81271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1025 19:21:24.133817 81271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I1025 19:21:24.137443 81271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I1025 19:21:24.140978 81271 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
I1025 19:21:24.141000 81271 start.go:472] detecting cgroup driver to use...
I1025 19:21:24.141098 81271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1025 19:21:24.148289 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
I1025 19:21:24.152695 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1025 19:21:24.156798 81271 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I1025 19:21:24.156845 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1025 19:21:24.161555 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1025 19:21:24.165802 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1025 19:21:24.170045 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1025 19:21:24.174147 81271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1025 19:21:24.178989 81271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1025 19:21:24.183346 81271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1025 19:21:24.187068 81271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1025 19:21:24.190749 81271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 19:21:24.256538 81271 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1025 19:21:24.267075 81271 start.go:472] detecting cgroup driver to use...
I1025 19:21:24.267165 81271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1025 19:21:24.291080 81271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1025 19:21:24.298637 81271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1025 19:21:24.314361 81271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1025 19:21:24.320495 81271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1025 19:21:24.328306 81271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I1025 19:21:24.336222 81271 ssh_runner.go:195] Run: which cri-dockerd
I1025 19:21:24.338331 81271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1025 19:21:24.342252 81271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I1025 19:21:24.348763 81271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1025 19:21:24.405869 81271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1025 19:21:24.475458 81271 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
I1025 19:21:24.475547 81271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1025 19:21:24.482383 81271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1025 19:21:24.549828 81271 ssh_runner.go:195] Run: sudo systemctl restart docker
I1025 19:21:25.742930 81271 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.193111097s)
I1025 19:21:25.765158 81271 out.go:177]
W1025 19:21:25.786366 81271 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
W1025 19:21:25.786394 81271 out.go:239] *
W1025 19:21:25.787562 81271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 19:21:25.852494 81271 out.go:177]
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-961000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-25 19:21:25.90566 -0700 PDT m=+2189.939236954
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-961000 -n running-upgrade-961000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-961000 -n running-upgrade-961000: exit status 6 (145.401661ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E1025 19:21:26.043596 81384 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-961000" does not appear in /Users/jenkins/minikube-integration/17491-76819/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-961000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-961000" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-darwin-amd64 delete -p running-upgrade-961000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-961000: (1.499744428s)
--- FAIL: TestRunningBinaryUpgrade (107.82s)