=== RUN TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.3285976803.exe start -p running-upgrade-064000 --memory=2200 --vm-driver=hyperkit
version_upgrade_test.go:132: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.6.2.3285976803.exe start -p running-upgrade-064000 --memory=2200 --vm-driver=hyperkit : (1m39.790795455s)
version_upgrade_test.go:142: (dbg) Run: out/minikube-darwin-amd64 start -p running-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit
E0531 14:28:23.536201 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:23.541991 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:23.552314 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:23.572604 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:23.613129 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:23.694039 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:23.854462 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:24.174862 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:24.815038 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:26.096439 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
E0531 14:28:28.657685 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90 (14.636859914s)
-- stdout --
* [running-upgrade-064000] minikube v1.30.1 on Darwin 13.4
- MINIKUBE_LOCATION=16577
- KUBECONFIG=/Users/jenkins/minikube-integration/16577-1168/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/16577-1168/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
* Using the hyperkit driver based on existing profile
* Starting control plane node running-upgrade-064000 in cluster running-upgrade-064000
* Updating the running hyperkit "running-upgrade-064000" VM ...
-- /stdout --
** stderr **
I0531 14:28:17.721140 4975 out.go:296] Setting OutFile to fd 1 ...
I0531 14:28:17.721373 4975 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 14:28:17.721380 4975 out.go:309] Setting ErrFile to fd 2...
I0531 14:28:17.721384 4975 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0531 14:28:17.721510 4975 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16577-1168/.minikube/bin
I0531 14:28:17.722939 4975 out.go:303] Setting JSON to false
I0531 14:28:17.742104 4975 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3467,"bootTime":1685565030,"procs":394,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
W0531 14:28:17.742207 4975 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0531 14:28:17.764459 4975 out.go:177] * [running-upgrade-064000] minikube v1.30.1 on Darwin 13.4
I0531 14:28:17.807384 4975 notify.go:220] Checking for updates...
I0531 14:28:17.807423 4975 out.go:177] - MINIKUBE_LOCATION=16577
I0531 14:28:17.829264 4975 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/16577-1168/kubeconfig
I0531 14:28:17.850098 4975 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0531 14:28:17.871200 4975 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0531 14:28:17.892088 4975 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16577-1168/.minikube
I0531 14:28:17.913050 4975 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0531 14:28:17.934703 4975 config.go:182] Loaded profile config "running-upgrade-064000": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0531 14:28:17.934726 4975 start_flags.go:683] config upgrade: Driver=hyperkit
I0531 14:28:17.934739 4975 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685477270-16600@sha256:c81b94f0b25b3fcc844c9d1acd1fbfa391b977b9269dbe87eea9194ab72e03b3
I0531 14:28:17.934856 4975 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/running-upgrade-064000/config.json ...
I0531 14:28:17.936166 4975 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0531 14:28:17.936225 4975 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0531 14:28:17.943758 4975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52475
I0531 14:28:17.944113 4975 main.go:141] libmachine: () Calling .GetVersion
I0531 14:28:17.944547 4975 main.go:141] libmachine: Using API Version 1
I0531 14:28:17.944564 4975 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 14:28:17.944773 4975 main.go:141] libmachine: () Calling .GetMachineName
I0531 14:28:17.944880 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:17.966184 4975 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
I0531 14:28:17.986984 4975 driver.go:375] Setting default libvirt URI to qemu:///system
I0531 14:28:17.987497 4975 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0531 14:28:17.987553 4975 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0531 14:28:17.995449 4975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
I0531 14:28:17.995795 4975 main.go:141] libmachine: () Calling .GetVersion
I0531 14:28:17.996119 4975 main.go:141] libmachine: Using API Version 1
I0531 14:28:17.996129 4975 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 14:28:17.996328 4975 main.go:141] libmachine: () Calling .GetMachineName
I0531 14:28:17.996431 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:18.045199 4975 out.go:177] * Using the hyperkit driver based on existing profile
I0531 14:28:18.066055 4975 start.go:295] selected driver: hyperkit
I0531 14:28:18.066097 4975 start.go:870] validating driver "hyperkit" against &{Name:running-upgrade-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685477270-16600@sha256:c81b94f0b25b3fcc844c9d1acd1fbfa391b977b9269dbe87eea9194ab72e03b3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.25 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0531 14:28:18.066320 4975 start.go:881] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0531 14:28:18.070347 4975 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.070466 4975 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/16577-1168/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0531 14:28:18.077061 4975 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.30.1
I0531 14:28:18.080452 4975 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0531 14:28:18.080471 4975 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0531 14:28:18.080548 4975 cni.go:84] Creating CNI manager for ""
I0531 14:28:18.080564 4975 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0531 14:28:18.080572 4975 start_flags.go:319] config:
{Name:running-upgrade-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1685477270-16600@sha256:c81b94f0b25b3fcc844c9d1acd1fbfa391b977b9269dbe87eea9194ab72e03b3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.25 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0531 14:28:18.080713 4975 iso.go:125] acquiring lock: {Name:mk11293a266cf92385db01b91fa0d3b855a83688 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.123246 4975 out.go:177] * Starting control plane node running-upgrade-064000 in cluster running-upgrade-064000
I0531 14:28:18.144104 4975 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
W0531 14:28:18.202541 4975 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0531 14:28:18.202753 4975 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/running-upgrade-064000/config.json ...
I0531 14:28:18.202909 4975 cache.go:107] acquiring lock: {Name:mk6acda4cfe64a912a63e6eb9d8128777a299d13 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.202936 4975 cache.go:107] acquiring lock: {Name:mkc09d1041791a0672ca7bb6732603f6f2725b43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.202946 4975 cache.go:107] acquiring lock: {Name:mk7c0b572a5b24f9e9a1a32d991c1e20272df879 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.203100 4975 cache.go:107] acquiring lock: {Name:mk12cb9bb91d29c567e297852ccd8613b0807616 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.203112 4975 cache.go:107] acquiring lock: {Name:mkfa59d0c2e0ecf517984106d7008d509e2d6d0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.203200 4975 cache.go:115] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0531 14:28:18.203229 4975 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 331.942µs
I0531 14:28:18.203259 4975 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0531 14:28:18.203309 4975 cache.go:107] acquiring lock: {Name:mkdc5a83357590e406794d2e795ebe7677bff7a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.203355 4975 cache.go:107] acquiring lock: {Name:mk354873d0d68824fffd64f609b1620947f36e1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.203356 4975 cache.go:107] acquiring lock: {Name:mk93159fc5ffac34fea983e6c3d842a6fcdce350 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0531 14:28:18.203479 4975 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
I0531 14:28:18.203480 4975 image.go:134] retrieving image: registry.k8s.io/pause:3.1
I0531 14:28:18.203574 4975 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
I0531 14:28:18.203716 4975 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
I0531 14:28:18.203819 4975 cache.go:195] Successfully downloaded all kic artifacts
I0531 14:28:18.203853 4975 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
I0531 14:28:18.203897 4975 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
I0531 14:28:18.203909 4975 start.go:364] acquiring machines lock for running-upgrade-064000: {Name:mkd155d6464a42be13fd4f179b235ad9633d36bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0531 14:28:18.204040 4975 start.go:368] acquired machines lock for "running-upgrade-064000" in 105.402µs
I0531 14:28:18.204072 4975 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
I0531 14:28:18.204097 4975 start.go:96] Skipping create...Using existing machine configuration
I0531 14:28:18.204122 4975 fix.go:55] fixHost starting: minikube
I0531 14:28:18.204713 4975 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0531 14:28:18.204751 4975 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0531 14:28:18.210775 4975 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
I0531 14:28:18.210834 4975 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
I0531 14:28:18.210833 4975 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
I0531 14:28:18.210831 4975 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
I0531 14:28:18.210878 4975 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
I0531 14:28:18.211123 4975 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
I0531 14:28:18.211185 4975 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
I0531 14:28:18.214257 4975 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52479
I0531 14:28:18.214588 4975 main.go:141] libmachine: () Calling .GetVersion
I0531 14:28:18.214966 4975 main.go:141] libmachine: Using API Version 1
I0531 14:28:18.214978 4975 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 14:28:18.215240 4975 main.go:141] libmachine: () Calling .GetMachineName
I0531 14:28:18.215348 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:18.215438 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetState
I0531 14:28:18.215523 4975 main.go:141] libmachine: (running-upgrade-064000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0531 14:28:18.215591 4975 main.go:141] libmachine: (running-upgrade-064000) DBG | hyperkit pid from json: 4875
I0531 14:28:18.216496 4975 fix.go:103] recreateIfNeeded on running-upgrade-064000: state=Running err=<nil>
W0531 14:28:18.216510 4975 fix.go:129] unexpected machine state, will restart: <nil>
I0531 14:28:18.259073 4975 out.go:177] * Updating the running hyperkit "running-upgrade-064000" VM ...
I0531 14:28:18.281029 4975 machine.go:88] provisioning docker machine ...
I0531 14:28:18.281048 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:18.281221 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetMachineName
I0531 14:28:18.281318 4975 buildroot.go:166] provisioning hostname "running-upgrade-064000"
I0531 14:28:18.281332 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetMachineName
I0531 14:28:18.281436 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.281540 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:18.281646 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.281749 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.281843 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:18.281972 4975 main.go:141] libmachine: Using SSH client type: native
I0531 14:28:18.282364 4975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0531 14:28:18.282375 4975 main.go:141] libmachine: About to run SSH command:
sudo hostname running-upgrade-064000 && echo "running-upgrade-064000" | sudo tee /etc/hostname
I0531 14:28:18.358548 4975 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-064000
I0531 14:28:18.358579 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.358736 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:18.358838 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.358973 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.359084 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:18.359211 4975 main.go:141] libmachine: Using SSH client type: native
I0531 14:28:18.359547 4975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0531 14:28:18.359562 4975 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-064000' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-064000/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-064000' | sudo tee -a /etc/hosts;
fi
fi
I0531 14:28:18.430118 4975 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0531 14:28:18.430142 4975 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16577-1168/.minikube CaCertPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16577-1168/.minikube}
I0531 14:28:18.430164 4975 buildroot.go:174] setting up certificates
I0531 14:28:18.430198 4975 provision.go:83] configureAuth start
I0531 14:28:18.430209 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetMachineName
I0531 14:28:18.430341 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetIP
I0531 14:28:18.430420 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.430500 4975 provision.go:138] copyHostCerts
I0531 14:28:18.430571 4975 exec_runner.go:144] found /Users/jenkins/minikube-integration/16577-1168/.minikube/key.pem, removing ...
I0531 14:28:18.430582 4975 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16577-1168/.minikube/key.pem
I0531 14:28:18.430714 4975 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16577-1168/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16577-1168/.minikube/key.pem (1675 bytes)
I0531 14:28:18.430928 4975 exec_runner.go:144] found /Users/jenkins/minikube-integration/16577-1168/.minikube/ca.pem, removing ...
I0531 14:28:18.430934 4975 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16577-1168/.minikube/ca.pem
I0531 14:28:18.431007 4975 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16577-1168/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16577-1168/.minikube/ca.pem (1078 bytes)
I0531 14:28:18.431165 4975 exec_runner.go:144] found /Users/jenkins/minikube-integration/16577-1168/.minikube/cert.pem, removing ...
I0531 14:28:18.431171 4975 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16577-1168/.minikube/cert.pem
I0531 14:28:18.431233 4975 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16577-1168/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16577-1168/.minikube/cert.pem (1123 bytes)
I0531 14:28:18.431377 4975 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16577-1168/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16577-1168/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16577-1168/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-064000 san=[192.168.64.25 192.168.64.25 localhost 127.0.0.1 minikube running-upgrade-064000]
I0531 14:28:18.504517 4975 provision.go:172] copyRemoteCerts
I0531 14:28:18.504588 4975 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0531 14:28:18.504612 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.504760 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:18.504855 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.504935 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:18.505028 4975 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
I0531 14:28:18.544434 4975 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16577-1168/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
I0531 14:28:18.554217 4975 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16577-1168/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0531 14:28:18.563416 4975 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16577-1168/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0531 14:28:18.572568 4975 provision.go:86] duration metric: configureAuth took 142.35519ms
I0531 14:28:18.572583 4975 buildroot.go:189] setting minikube options for container-runtime
I0531 14:28:18.572712 4975 config.go:182] Loaded profile config "running-upgrade-064000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0531 14:28:18.572726 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:18.572862 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.572968 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:18.573054 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.573140 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.573221 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:18.573324 4975 main.go:141] libmachine: Using SSH client type: native
I0531 14:28:18.573622 4975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0531 14:28:18.573630 4975 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0531 14:28:18.644942 4975 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
I0531 14:28:18.644963 4975 buildroot.go:70] root file system type: tmpfs
I0531 14:28:18.645054 4975 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0531 14:28:18.645070 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.645206 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:18.645290 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.645384 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.645478 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:18.645578 4975 main.go:141] libmachine: Using SSH client type: native
I0531 14:28:18.645885 4975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0531 14:28:18.645947 4975 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0531 14:28:18.722424 4975 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0531 14:28:18.722448 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:18.722586 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:18.722677 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.722769 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:18.722894 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:18.723017 4975 main.go:141] libmachine: Using SSH client type: native
I0531 14:28:18.723326 4975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0531 14:28:18.723342 4975 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0531 14:28:19.471942 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
I0531 14:28:19.656465 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
I0531 14:28:19.826476 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
I0531 14:28:19.897047 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
I0531 14:28:20.040755 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
I0531 14:28:20.327029 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
I0531 14:28:20.464570 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
I0531 14:28:20.464583 4975 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 2.261605107s
I0531 14:28:20.464591 4975 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
I0531 14:28:20.620299 4975 cache.go:162] opening: /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
I0531 14:28:22.054110 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
I0531 14:28:22.054127 4975 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 3.850960674s
I0531 14:28:22.054135 4975 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
I0531 14:28:24.548651 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
I0531 14:28:24.548669 4975 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 6.345757467s
I0531 14:28:24.548678 4975 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
I0531 14:28:25.476804 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
I0531 14:28:25.476818 4975 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 7.274051639s
I0531 14:28:25.476838 4975 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
I0531 14:28:25.934391 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
I0531 14:28:25.934406 4975 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 7.731638051s
I0531 14:28:25.934414 4975 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
I0531 14:28:27.451184 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
I0531 14:28:27.451199 4975 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 9.248090174s
I0531 14:28:27.451207 4975 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
I0531 14:28:27.825989 4975 cache.go:157] /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
I0531 14:28:27.826005 4975 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 9.622971968s
I0531 14:28:27.826035 4975 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/16577-1168/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
I0531 14:28:27.826058 4975 cache.go:87] Successfully saved all images to host disk.
I0531 14:28:30.445738 4975 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
+++ /lib/systemd/system/docker.service.new
@@ -3,9 +3,12 @@
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
+Restart=on-failure
@@ -21,7 +24,7 @@
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0531 14:28:30.445767 4975 machine.go:91] provisioned docker machine in 12.164946101s
I0531 14:28:30.445777 4975 start.go:300] post-start starting for "running-upgrade-064000" (driver="hyperkit")
I0531 14:28:30.445783 4975 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0531 14:28:30.445792 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:30.446021 4975 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0531 14:28:30.446037 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:30.446146 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:30.446251 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:30.446346 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:30.446436 4975 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
I0531 14:28:30.485756 4975 ssh_runner.go:195] Run: cat /etc/os-release
I0531 14:28:30.488197 4975 info.go:137] Remote host: Buildroot 2019.02.7
I0531 14:28:30.488208 4975 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16577-1168/.minikube/addons for local assets ...
I0531 14:28:30.488280 4975 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16577-1168/.minikube/files for local assets ...
I0531 14:28:30.488427 4975 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16577-1168/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
I0531 14:28:30.488587 4975 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0531 14:28:30.492364 4975 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16577-1168/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
I0531 14:28:30.501432 4975 start.go:303] post-start completed in 55.648048ms
I0531 14:28:30.501444 4975 fix.go:57] fixHost completed within 12.297560777s
I0531 14:28:30.501460 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:30.501590 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:30.501682 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:30.501787 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:30.501878 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:30.501989 4975 main.go:141] libmachine: Using SSH client type: native
I0531 14:28:30.502294 4975 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140c4a0] 0x140f540 <nil> [] 0s} 192.168.64.25 22 <nil> <nil>}
I0531 14:28:30.502302 4975 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0531 14:28:30.574615 4975 main.go:141] libmachine: SSH cmd err, output: <nil>: 1685568510.705814298
I0531 14:28:30.574627 4975 fix.go:207] guest clock: 1685568510.705814298
I0531 14:28:30.574634 4975 fix.go:220] Guest: 2023-05-31 14:28:30.705814298 -0700 PDT Remote: 2023-05-31 14:28:30.501449 -0700 PDT m=+12.812713758 (delta=204.365298ms)
I0531 14:28:30.574648 4975 fix.go:191] guest clock delta is within tolerance: 204.365298ms
I0531 14:28:30.574652 4975 start.go:83] releasing machines lock for "running-upgrade-064000", held for 12.370823099s
I0531 14:28:30.574671 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:30.574799 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetIP
I0531 14:28:30.574895 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:30.575197 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:30.575300 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .DriverName
I0531 14:28:30.575366 4975 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0531 14:28:30.575410 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:30.575434 4975 ssh_runner.go:195] Run: cat /version.json
I0531 14:28:30.575444 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHHostname
I0531 14:28:30.575517 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:30.575543 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHPort
I0531 14:28:30.575612 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:30.575639 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHKeyPath
I0531 14:28:30.575699 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:30.575718 4975 main.go:141] libmachine: (running-upgrade-064000) Calling .GetSSHUsername
I0531 14:28:30.575793 4975 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
I0531 14:28:30.575812 4975 sshutil.go:53] new ssh client: &{IP:192.168.64.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16577-1168/.minikube/machines/running-upgrade-064000/id_rsa Username:docker}
W0531 14:28:30.617539 4975 start.go:409] Unable to open version.json: cat /version.json: Process exited with status 1
stdout:
stderr:
cat: /version.json: No such file or directory
I0531 14:28:30.617602 4975 ssh_runner.go:195] Run: systemctl --version
I0531 14:28:30.666809 4975 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0531 14:28:30.670825 4975 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0531 14:28:30.670871 4975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0531 14:28:30.674327 4975 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0531 14:28:30.677905 4975 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
I0531 14:28:30.677916 4975 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
I0531 14:28:30.677929 4975 start.go:481] detecting cgroup driver to use...
I0531 14:28:30.677993 4975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0531 14:28:30.685922 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
I0531 14:28:30.690093 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0531 14:28:30.694158 4975 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0531 14:28:30.694197 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0531 14:28:30.700619 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0531 14:28:30.704679 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0531 14:28:30.708872 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0531 14:28:30.712972 4975 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0531 14:28:30.717552 4975 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0531 14:28:30.721546 4975 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0531 14:28:30.725146 4975 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0531 14:28:30.728582 4975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0531 14:28:30.785527 4975 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0531 14:28:30.801202 4975 start.go:481] detecting cgroup driver to use...
I0531 14:28:30.801282 4975 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0531 14:28:30.809381 4975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0531 14:28:30.816506 4975 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0531 14:28:30.833976 4975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0531 14:28:30.839992 4975 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0531 14:28:30.846865 4975 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0531 14:28:30.854281 4975 ssh_runner.go:195] Run: which cri-dockerd
I0531 14:28:30.856414 4975 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0531 14:28:30.860241 4975 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0531 14:28:30.866867 4975 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0531 14:28:30.926340 4975 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0531 14:28:30.997789 4975 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
I0531 14:28:30.997807 4975 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0531 14:28:31.004585 4975 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0531 14:28:31.078130 4975 ssh_runner.go:195] Run: sudo systemctl restart docker
I0531 14:28:32.132323 4975 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.054191372s)
I0531 14:28:32.156030 4975 out.go:177]
W0531 14:28:32.176092 4975 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:
stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
W0531 14:28:32.176122 4975 out.go:239] *
W0531 14:28:32.177409 4975 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0531 14:28:32.295011 4975 out.go:177]
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-064000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-31 14:28:32.327416 -0700 PDT m=+2211.255709937
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-064000 -n running-upgrade-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-064000 -n running-upgrade-064000: exit status 6 (138.22674ms)
-- stdout --
Running
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr **
E0531 14:28:32.448855 5089 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-064000" does not appear in /Users/jenkins/minikube-integration/16577-1168/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-064000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-064000" profile ...
helpers_test.go:178: (dbg) Run: out/minikube-darwin-amd64 delete -p running-upgrade-064000
E0531 14:28:33.778227 1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16577-1168/.minikube/profiles/skaffold-369000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-064000: (1.433945616s)
--- FAIL: TestRunningBinaryUpgrade (118.92s)