=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-darwin-amd64 start -p pause-20220921152522-3535 --alsologtostderr -v=1 --driver=hyperkit
E0921 15:26:22.139035 3535 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/skaffold-20220921151524-3535/client.crt: no such file or directory
E0921 15:26:49.831517 3535 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/skaffold-20220921151524-3535/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220921152522-3535 --alsologtostderr -v=1 --driver=hyperkit : (1m13.31014205s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-20220921152522-3535] minikube v1.27.0 on Darwin 12.6
- MINIKUBE_LOCATION=14995
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube
* Using the hyperkit driver based on existing profile
* Starting control plane node pause-20220921152522-3535 in cluster pause-20220921152522-3535
* Updating the running hyperkit "pause-20220921152522-3535" VM ...
* Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-20220921152522-3535" cluster and "default" namespace by default
-- /stdout --
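(The assertion at pause_test.go:100 requires the second `minikube start` above to report that the running cluster needs no reconfiguration. A minimal Go sketch of that kind of check, reusing the exact command and marker string from this log; illustrative only, not the test's actual code:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Second start against the existing profile, exactly as logged above.
	out, err := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "pause-20220921152522-3535",
		"--alsologtostderr", "-v=1", "--driver=hyperkit").CombinedOutput()
	if err != nil {
		fmt.Println("start failed:", err)
		return
	}
	// The marker the test expects; its absence is what fails this run.
	if !strings.Contains(string(out), "The running cluster does not require reconfiguration") {
		fmt.Println("second start did not skip reconfiguration")
	}
}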
** stderr **
I0921 15:26:16.412297 10408 out.go:296] Setting OutFile to fd 1 ...
I0921 15:26:16.412857 10408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 15:26:16.412883 10408 out.go:309] Setting ErrFile to fd 2...
I0921 15:26:16.412925 10408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 15:26:16.413172 10408 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
I0921 15:26:16.413935 10408 out.go:303] Setting JSON to false
I0921 15:26:16.429337 10408 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5147,"bootTime":1663794029,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0921 15:26:16.429439 10408 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0921 15:26:16.451061 10408 out.go:177] * [pause-20220921152522-3535] minikube v1.27.0 on Darwin 12.6
I0921 15:26:16.492895 10408 notify.go:214] Checking for updates...
I0921 15:26:16.513942 10408 out.go:177] - MINIKUBE_LOCATION=14995
I0921 15:26:16.535147 10408 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:26:16.555899 10408 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0921 15:26:16.577004 10408 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0921 15:26:16.598036 10408 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube
I0921 15:26:16.619232 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:16.619572 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.619620 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:16.626042 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52950
I0921 15:26:16.626541 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:16.626992 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:16.627004 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:16.627211 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:16.627372 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:16.627501 10408 driver.go:365] Setting default libvirt URI to qemu:///system
I0921 15:26:16.627783 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.627806 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:16.634000 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52952
I0921 15:26:16.634367 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:16.634679 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:16.634691 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:16.634960 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:16.635067 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:16.661930 10408 out.go:177] * Using the hyperkit driver based on existing profile
I0921 15:26:16.703890 10408 start.go:284] selected driver: hyperkit
I0921 15:26:16.703910 10408 start.go:808] validating driver "hyperkit" against &{Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:16.704025 10408 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0921 15:26:16.704092 10408 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 15:26:16.704203 10408 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0921 15:26:16.710571 10408 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.27.0
I0921 15:26:16.713621 10408 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.713649 10408 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0921 15:26:16.715630 10408 cni.go:95] Creating CNI manager for ""
I0921 15:26:16.715647 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:26:16.715664 10408 start_flags.go:316] config:
{Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:16.715818 10408 iso.go:124] acquiring lock: {Name:mke8c57399926d29e846b47dd4be4625ba5fcaea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 15:26:16.774023 10408 out.go:177] * Starting control plane node pause-20220921152522-3535 in cluster pause-20220921152522-3535
I0921 15:26:16.794876 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:16.794956 10408 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
I0921 15:26:16.795012 10408 cache.go:57] Caching tarball of preloaded images
I0921 15:26:16.795122 10408 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0921 15:26:16.795144 10408 cache.go:60] Finished verifying existence of preloaded tar for v1.25.2 on docker
I0921 15:26:16.795239 10408 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/config.json ...
I0921 15:26:16.795594 10408 cache.go:208] Successfully downloaded all kic artifacts
I0921 15:26:16.795620 10408 start.go:364] acquiring machines lock for pause-20220921152522-3535: {Name:mk2f7774d81f069136708da9f7558413d7930511 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0921 15:26:19.803647 10408 start.go:368] acquired machines lock for "pause-20220921152522-3535" in 3.008011859s
I0921 15:26:19.803693 10408 start.go:96] Skipping create...Using existing machine configuration
I0921 15:26:19.803704 10408 fix.go:55] fixHost starting:
I0921 15:26:19.804014 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:19.804040 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:19.810489 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52975
I0921 15:26:19.810845 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:19.811156 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:19.811167 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:19.811357 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:19.811458 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:19.811557 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:26:19.811664 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:19.811739 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:26:19.812542 10408 fix.go:103] recreateIfNeeded on pause-20220921152522-3535: state=Running err=<nil>
W0921 15:26:19.812564 10408 fix.go:129] unexpected machine state, will restart: <nil>
I0921 15:26:19.835428 10408 out.go:177] * Updating the running hyperkit "pause-20220921152522-3535" VM ...
I0921 15:26:19.856170 10408 machine.go:88] provisioning docker machine ...
I0921 15:26:19.856192 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:19.856377 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.856478 10408 buildroot.go:166] provisioning hostname "pause-20220921152522-3535"
I0921 15:26:19.856489 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.856574 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.856646 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:19.856744 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.856835 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.856914 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:19.857028 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.857193 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:19.857203 10408 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-20220921152522-3535 && echo "pause-20220921152522-3535" | sudo tee /etc/hostname
I0921 15:26:19.929633 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220921152522-3535
I0921 15:26:19.929693 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.929883 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:19.930020 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.930143 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.930253 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:19.930438 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.930577 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:19.930595 10408 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20220921152522-3535' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220921152522-3535/g' /etc/hosts;
else
echo '127.0.1.1 pause-20220921152522-3535' | sudo tee -a /etc/hosts;
fi
fi
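(The snippet above is the provisioner's idempotent /etc/hosts update: rewrite the 127.0.1.1 entry if one exists, append one otherwise. A sketch of how such a command string could be composed, as a hypothetical helper; minikube's real logic lives in its buildroot provisioner:)

package main

import "fmt"

// hostsCmd renders the same guarded sed/tee logic for a given hostname.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() { fmt.Println(hostsCmd("pause-20220921152522-3535")) }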
I0921 15:26:19.992780 10408 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:19.992803 10408 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
I0921 15:26:19.992832 10408 buildroot.go:174] setting up certificates
I0921 15:26:19.992843 10408 provision.go:83] configureAuth start
I0921 15:26:19.992852 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.993017 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetIP
I0921 15:26:19.993132 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.993213 10408 provision.go:138] copyHostCerts
I0921 15:26:19.993302 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
I0921 15:26:19.993310 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
I0921 15:26:19.993450 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
I0921 15:26:19.993643 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
I0921 15:26:19.993649 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
I0921 15:26:19.993780 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1679 bytes)
I0921 15:26:19.994087 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
I0921 15:26:19.994094 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
I0921 15:26:19.994203 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
I0921 15:26:19.994341 10408 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.pause-20220921152522-3535 san=[192.168.64.28 192.168.64.28 localhost 127.0.0.1 minikube pause-20220921152522-3535]
I0921 15:26:20.145157 10408 provision.go:172] copyRemoteCerts
I0921 15:26:20.145229 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0921 15:26:20.145247 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.145395 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.145492 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.145591 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.145687 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.181860 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0921 15:26:20.204288 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0921 15:26:20.223046 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0921 15:26:20.242859 10408 provision.go:86] duration metric: configureAuth took 250.000259ms
I0921 15:26:20.242872 10408 buildroot.go:189] setting minikube options for container-runtime
I0921 15:26:20.243031 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:20.243050 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.243218 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.243320 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.243440 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.243555 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.243661 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.243798 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.243914 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.243922 10408 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0921 15:26:20.307004 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0921 15:26:20.307030 10408 buildroot.go:70] root file system type: tmpfs
I0921 15:26:20.307188 10408 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0921 15:26:20.307206 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.307379 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.307501 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.307587 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.307679 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.307823 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.307954 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.308011 10408 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0921 15:26:20.380017 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0921 15:26:20.380044 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.380193 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.380302 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.380410 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.380514 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.380665 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.380781 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.380797 10408 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
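(This command makes the unit update idempotent: `diff -u` exits non-zero only when the freshly rendered docker.service.new differs from the installed unit, so the move/daemon-reload/restart branch runs only on change. A Go sketch of composing it, as a hypothetical helper rather than minikube's actual provisioner code:)

package main

import "fmt"

// updateUnitCmd installs path.new over path only if the contents differ,
// then reloads systemd and restarts docker.
func updateUnitCmd(path string) string {
	return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
		"sudo systemctl -f restart docker; }", path)
}

func main() { fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service")) }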
I0921 15:26:20.447616 10408 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:20.447629 10408 machine.go:91] provisioned docker machine in 591.445478ms
I0921 15:26:20.447641 10408 start.go:300] post-start starting for "pause-20220921152522-3535" (driver="hyperkit")
I0921 15:26:20.447646 10408 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0921 15:26:20.447659 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.447885 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0921 15:26:20.447901 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.448051 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.448156 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.448291 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.448405 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.484862 10408 ssh_runner.go:195] Run: cat /etc/os-release
I0921 15:26:20.487726 10408 info.go:137] Remote host: Buildroot 2021.02.12
I0921 15:26:20.487742 10408 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
I0921 15:26:20.487867 10408 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
I0921 15:26:20.488046 10408 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem -> 35352.pem in /etc/ssl/certs
I0921 15:26:20.488202 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0921 15:26:20.495074 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:20.515167 10408 start.go:303] post-start completed in 67.502258ms
I0921 15:26:20.515187 10408 fix.go:57] fixHost completed within 711.484594ms
I0921 15:26:20.515203 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.515368 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.515520 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.515638 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.515770 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.515941 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.516053 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.516063 10408 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I0921 15:26:20.577712 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: 1663799180.686854068
I0921 15:26:20.577735 10408 fix.go:207] guest clock: 1663799180.686854068
I0921 15:26:20.577746 10408 fix.go:220] Guest: 2022-09-21 15:26:20.686854068 -0700 PDT Remote: 2022-09-21 15:26:20.51519 -0700 PDT m=+4.146234536 (delta=171.664068ms)
I0921 15:26:20.577765 10408 fix.go:191] guest clock delta is within tolerance: 171.664068ms
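(fix.go compares the guest clock, read via `date +%s.%N` over SSH, against the host clock and leaves it alone when the skew is small; here the 171ms delta passes. A sketch of that tolerance check; the 2s threshold is an assumption for illustration, not minikube's actual constant:)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute host/guest clock skew
// is at or below tol.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(171664068 * time.Nanosecond) // delta from the log
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}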
I0921 15:26:20.577770 10408 start.go:83] releasing machines lock for "pause-20220921152522-3535", held for 774.111447ms
I0921 15:26:20.577789 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.577928 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetIP
I0921 15:26:20.578042 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578174 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578318 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578705 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578809 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578906 10408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0921 15:26:20.578961 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.578984 10408 ssh_runner.go:195] Run: systemctl --version
I0921 15:26:20.578999 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.579066 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.579106 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.579182 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.579228 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.579290 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.579338 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.579415 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.579448 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.650058 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:20.650150 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:20.668593 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
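(Because every image in the preload list is already present, extraction of the preloaded tarball is skipped. A sketch of that comparison; illustrative, not minikube's actual docker.go check:)

package main

import (
	"fmt"
	"strings"
)

// allPreloaded reports whether every wanted image appears in the
// `docker images --format {{.Repository}}:{{.Tag}}` output.
func allPreloaded(got string, want []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(got), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range want {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	got := "registry.k8s.io/pause:3.8\nregistry.k8s.io/etcd:3.5.4-0"
	fmt.Println(allPreloaded(got, []string{"registry.k8s.io/pause:3.8"}))
}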
I0921 15:26:20.668610 10408 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:20.668676 10408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0921 15:26:20.679656 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0921 15:26:20.692651 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:20.702013 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0921 15:26:20.715942 10408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0921 15:26:20.844184 10408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0921 15:26:20.974988 10408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:21.117162 10408 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:29.173173 10408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.055980768s)
I0921 15:26:29.173240 10408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0921 15:26:29.288535 10408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:29.417731 10408 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0921 15:26:29.433270 10408 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0921 15:26:29.433356 10408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0921 15:26:29.447293 10408 start.go:471] Will wait 60s for crictl version
I0921 15:26:29.447353 10408 ssh_runner.go:195] Run: sudo crictl version
I0921 15:26:29.482799 10408 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.18
RuntimeApiVersion: 1.41.0
I0921 15:26:29.482858 10408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:29.651357 10408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:29.808439 10408 out.go:204] * Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
I0921 15:26:29.808534 10408 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0921 15:26:29.818111 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:29.818177 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:29.873620 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:29.873633 10408 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:29.873699 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:29.929931 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:29.929952 10408 cache_images.go:84] Images are preloaded, skipping loading
I0921 15:26:29.930056 10408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0921 15:26:30.064287 10408 cni.go:95] Creating CNI manager for ""
I0921 15:26:30.064305 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:26:30.064320 10408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0921 15:26:30.064331 10408 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.28 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220921152522-3535 NodeName:pause-20220921152522-3535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.28 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0921 15:26:30.064423 10408 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.28
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220921152522-3535"
kubeletExtraArgs:
node-ip: 192.168.64.28
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.28"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
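(The generated config is a single YAML stream of four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, which kubeadm reads one by one. A decoding sketch, assuming gopkg.in/yaml.v3 is available; illustrative only:)

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Feed the kubeadm.yaml stream on stdin; print each document's kind.
	dec := yaml.NewDecoder(os.Stdin)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Println("kind:", doc["kind"])
	}
}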
I0921 15:26:30.064505 10408 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220921152522-3535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.28 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
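(The kubelet drop-in above clears the inherited ExecStart and rebuilds it from the node settings: hostname override, node IP, and the cri-dockerd socket. Assembling those flags, sketched as a hypothetical helper rather than minikube's real template:)

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart mirrors the flag set from the drop-in above.
func kubeletExecStart(version, hostname, nodeIP, criSocket string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=" + criSocket,
		"--hostname-override=" + hostname,
		"--image-service-endpoint=" + criSocket,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
		"--runtime-request-timeout=15m",
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s",
		version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("v1.25.2", "pause-20220921152522-3535",
		"192.168.64.28", "/var/run/cri-dockerd.sock"))
}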
I0921 15:26:30.064579 10408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
I0921 15:26:30.076550 10408 binaries.go:44] Found k8s binaries, skipping transfer
I0921 15:26:30.076638 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0921 15:26:30.090012 10408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0921 15:26:30.137803 10408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0921 15:26:30.178146 10408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0921 15:26:30.203255 10408 ssh_runner.go:195] Run: grep 192.168.64.28 control-plane.minikube.internal$ /etc/hosts
I0921 15:26:30.209779 10408 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535 for IP: 192.168.64.28
I0921 15:26:30.209879 10408 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
I0921 15:26:30.209934 10408 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
I0921 15:26:30.210019 10408 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key
I0921 15:26:30.210082 10408 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.key.6733b561
I0921 15:26:30.210133 10408 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.key
I0921 15:26:30.210333 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem (1338 bytes)
W0921 15:26:30.210375 10408 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535_empty.pem, impossibly tiny 0 bytes
I0921 15:26:30.210388 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
I0921 15:26:30.210421 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
I0921 15:26:30.210453 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
I0921 15:26:30.210483 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1679 bytes)
I0921 15:26:30.210550 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:30.211086 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0921 15:26:30.279069 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0921 15:26:30.343250 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0921 15:26:30.413180 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0921 15:26:30.448798 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0921 15:26:30.476175 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0921 15:26:30.497204 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0921 15:26:30.524103 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0921 15:26:30.558966 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /usr/share/ca-certificates/35352.pem (1708 bytes)
I0921 15:26:30.576319 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0921 15:26:30.592912 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem --> /usr/share/ca-certificates/3535.pem (1338 bytes)
I0921 15:26:30.609099 10408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0921 15:26:30.627179 10408 ssh_runner.go:195] Run: openssl version
I0921 15:26:30.632801 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3535.pem && ln -fs /usr/share/ca-certificates/3535.pem /etc/ssl/certs/3535.pem"
I0921 15:26:30.641473 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3535.pem
I0921 15:26:30.645794 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:31 /usr/share/ca-certificates/3535.pem
I0921 15:26:30.645836 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3535.pem
I0921 15:26:30.649794 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3535.pem /etc/ssl/certs/51391683.0"
I0921 15:26:30.657630 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35352.pem && ln -fs /usr/share/ca-certificates/35352.pem /etc/ssl/certs/35352.pem"
I0921 15:26:30.665747 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35352.pem
I0921 15:26:30.669804 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:31 /usr/share/ca-certificates/35352.pem
I0921 15:26:30.669850 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35352.pem
I0921 15:26:30.679638 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35352.pem /etc/ssl/certs/3ec20f2e.0"
I0921 15:26:30.700907 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0921 15:26:30.734369 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.762750 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:27 /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.762827 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.777627 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
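(The openssl sequence above installs each CA under /usr/share/ca-certificates and symlinks it into /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`; that hash-named link is how OpenSSL locates trust anchors. Composing one such link command, as a hypothetical helper:)

package main

import "fmt"

// installCertCmd links an installed PEM under its OpenSSL subject hash,
// e.g. minikubeCA.pem -> b5213941.0 as in the log above.
func installCertCmd(name, hash string) string {
	return fmt.Sprintf("sudo /bin/bash -c \"test -L /etc/ssl/certs/%[2]s.0 || "+
		"ln -fs /etc/ssl/certs/%[1]s /etc/ssl/certs/%[2]s.0\"", name, hash)
}

func main() { fmt.Println(installCertCmd("minikubeCA.pem", "b5213941")) }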
I0921 15:26:30.785856 10408 kubeadm.go:396] StartCluster: {Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:30.785963 10408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:26:30.816264 10408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0921 15:26:30.823179 10408 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0921 15:26:30.823195 10408 kubeadm.go:627] restartCluster start
I0921 15:26:30.823236 10408 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0921 15:26:30.837045 10408 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0921 15:26:30.837457 10408 kubeconfig.go:92] found "pause-20220921152522-3535" server: "https://192.168.64.28:8443"
I0921 15:26:30.837839 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:26:30.838375 10408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0921 15:26:30.852535 10408 api_server.go:165] Checking apiserver status ...
I0921 15:26:30.852588 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:26:30.868059 10408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4520/cgroup
I0921 15:26:30.876185 10408 api_server.go:181] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope"
I0921 15:26:30.876238 10408 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope/freezer.state
I0921 15:26:30.912452 10408 api_server.go:203] freezer state: "THAWED"
I0921 15:26:30.912472 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:35.914013 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:35.914061 10408 retry.go:31] will retry after 263.082536ms: state is "Stopped"
I0921 15:26:36.179260 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:41.180983 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:41.181007 10408 retry.go:31] will retry after 381.329545ms: state is "Stopped"
I0921 15:26:41.563913 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:46.564586 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:46.766257 10408 api_server.go:165] Checking apiserver status ...
I0921 15:26:46.766358 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:26:46.776615 10408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4520/cgroup
I0921 15:26:46.782756 10408 api_server.go:181] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope"
I0921 15:26:46.782801 10408 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope/freezer.state
I0921 15:26:46.789298 10408 api_server.go:203] freezer state: "THAWED"
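The pgrep/egrep/cat steps above determine whether the apiserver container is paused: resolve its pid, find its freezer cgroup in /proc/<pid>/cgroup, then read that cgroup's freezer.state ("THAWED" vs "FROZEN"). A hedged sketch of that check, assuming cgroup v1 as in the guest above; an illustration, not minikube's exact code:

```go
// Read the freezer state of a process's cgroup (cgroup v1 layout).
package main

import (
	"fmt"
	"os"
	"strings"
)

func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// cgroup v1 lines look like "2:freezer:/kubepods.slice/...".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	state, err := freezerState(4520) // pid taken from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("freezer state:", state)
}
```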
I0921 15:26:46.789309 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:51.288815 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": read tcp 192.168.64.1:52998->192.168.64.28:8443: read: connection reset by peer
I0921 15:26:51.288848 10408 retry.go:31] will retry after 242.214273ms: state is "Stopped"
I0921 15:26:51.532207 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:51.632400 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:51.632425 10408 retry.go:31] will retry after 300.724609ms: state is "Stopped"
I0921 15:26:51.934415 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:52.035144 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:52.035176 10408 retry.go:31] will retry after 427.113882ms: state is "Stopped"
I0921 15:26:52.464328 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:52.566391 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:52.566426 10408 retry.go:31] will retry after 382.2356ms: state is "Stopped"
I0921 15:26:52.948987 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:53.049570 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:53.049605 10408 retry.go:31] will retry after 505.529557ms: state is "Stopped"
I0921 15:26:53.556334 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:53.658245 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:53.658268 10408 retry.go:31] will retry after 609.195524ms: state is "Stopped"
I0921 15:26:54.269593 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:54.371296 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:54.371340 10408 retry.go:31] will retry after 858.741692ms: state is "Stopped"
I0921 15:26:55.230116 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:55.331214 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:55.331251 10408 retry.go:31] will retry after 1.201160326s: state is "Stopped"
I0921 15:26:56.533116 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:56.635643 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:56.635670 10408 retry.go:31] will retry after 1.723796097s: state is "Stopped"
I0921 15:26:58.359704 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:58.461478 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:58.461505 10408 retry.go:31] will retry after 1.596532639s: state is "Stopped"
I0921 15:27:00.059136 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:00.159945 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:27:00.159971 10408 api_server.go:165] Checking apiserver status ...
I0921 15:27:00.160018 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0921 15:27:00.169632 10408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0921 15:27:00.169647 10408 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
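The retry.go lines above show the pattern behind that verdict: probe /healthz with a short client timeout, retry with growing backoff, and give up at a deadline, at which point the cluster is declared in need of reconfiguration. A rough Go sketch of the loop; `InsecureSkipVerify` stands in for loading the minikube CA, and the intervals only approximate what the log shows:

```go
// Poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	backoff := 250 * time.Millisecond
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // roughly the growth the retry.go lines report
	}
	return fmt.Errorf("apiserver not healthy after %s", deadline)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.64.28:8443/healthz", 30*time.Second))
}
```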
I0921 15:27:00.169656 10408 kubeadm.go:1114] stopping kube-system containers ...
I0921 15:27:00.169722 10408 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:27:00.201882 10408 docker.go:443] Stopping containers: [d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49]
I0921 15:27:00.201952 10408 ssh_runner.go:195] Run: docker stop d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49
I0921 15:27:05.344188 10408 ssh_runner.go:235] Completed: docker stop d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49: (5.142213633s)
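Before reconfiguring, every kube-system container is stopped by matching kubelet's `k8s_<container>_<pod>_<namespace>_` Docker naming scheme. A small sketch of the same list-then-stop batch, run locally rather than over ssh as minikube does:

```go
// List kube-system pod containers by Docker name filter, then stop the batch.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
	stop.Stdout, stop.Stderr = os.Stdout, os.Stderr
	if err := stop.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```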
I0921 15:27:05.344244 10408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0921 15:27:05.419551 10408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0921 15:27:05.433375 10408 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Sep 21 22:25 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Sep 21 22:25 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2039 Sep 21 22:25 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Sep 21 22:25 /etc/kubernetes/scheduler.conf
I0921 15:27:05.433432 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0921 15:27:05.439704 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0921 15:27:05.445874 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0921 15:27:05.453215 10408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0921 15:27:05.453270 10408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0921 15:27:05.459417 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0921 15:27:05.465309 10408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0921 15:27:05.465358 10408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0921 15:27:05.476008 10408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0921 15:27:05.484410 10408 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0921 15:27:05.484426 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:05.534434 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.469884 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.628867 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.698897 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
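Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing kubeadm.yaml. A simplified sketch of that sequence, with the binary and config paths taken from the log and error handling reduced to the essentials:

```go
// Re-run individual kubeadm init phases against an existing config.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.25.2/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
```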
I0921 15:27:06.759299 10408 api_server.go:51] waiting for apiserver process to appear ...
I0921 15:27:06.759353 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:27:06.778540 10408 api_server.go:71] duration metric: took 19.241402ms to wait for apiserver process to appear ...
I0921 15:27:06.778552 10408 api_server.go:87] waiting for apiserver healthz status ...
I0921 15:27:06.778559 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:11.780440 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:27:12.280518 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.000183 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0921 15:27:14.000198 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0921 15:27:14.282668 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.289281 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0921 15:27:14.289293 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0921 15:27:14.780762 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.786529 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0921 15:27:14.786540 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0921 15:27:15.280930 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:15.288106 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 200:
ok
I0921 15:27:15.292969 10408 api_server.go:140] control plane version: v1.25.2
I0921 15:27:15.292981 10408 api_server.go:130] duration metric: took 8.514415313s to wait for apiserver health ...
I0921 15:27:15.292986 10408 cni.go:95] Creating CNI manager for ""
I0921 15:27:15.292994 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:27:15.293004 10408 system_pods.go:43] waiting for kube-system pods to appear ...
I0921 15:27:15.298309 10408 system_pods.go:59] 6 kube-system pods found
I0921 15:27:15.298324 10408 system_pods.go:61] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:15.298330 10408 system_pods.go:61] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0921 15:27:15.298335 10408 system_pods.go:61] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0921 15:27:15.298340 10408 system_pods.go:61] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0921 15:27:15.298344 10408 system_pods.go:61] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:15.298348 10408 system_pods.go:61] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0921 15:27:15.298352 10408 system_pods.go:74] duration metric: took 5.344262ms to wait for pod list to return data ...
I0921 15:27:15.298357 10408 node_conditions.go:102] verifying NodePressure condition ...
I0921 15:27:15.300304 10408 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0921 15:27:15.300319 10408 node_conditions.go:123] node cpu capacity is 2
I0921 15:27:15.300328 10408 node_conditions.go:105] duration metric: took 1.967816ms to run NodePressure ...
I0921 15:27:15.300342 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:15.402185 10408 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0921 15:27:15.405062 10408 kubeadm.go:778] kubelet initialised
I0921 15:27:15.405072 10408 kubeadm.go:779] duration metric: took 2.873657ms waiting for restarted kubelet to initialise ...
I0921 15:27:15.405080 10408 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:15.408132 10408 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:15.411452 10408 pod_ready.go:92] pod "coredns-565d847f94-9wtnp" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:15.411459 10408 pod_ready.go:81] duration metric: took 3.317632ms waiting for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:15.411465 10408 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:17.420289 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:19.421503 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:21.919889 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:24.419226 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:25.920028 10408 pod_ready.go:92] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.920043 10408 pod_ready.go:81] duration metric: took 10.508561161s waiting for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.920049 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.923063 10408 pod_ready.go:92] pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.923071 10408 pod_ready.go:81] duration metric: took 3.017613ms waiting for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.923077 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.926284 10408 pod_ready.go:92] pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.926292 10408 pod_ready.go:81] duration metric: took 3.20987ms waiting for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.926297 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.929448 10408 pod_ready.go:92] pod "kube-proxy-5c7jc" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.929456 10408 pod_ready.go:81] duration metric: took 3.154194ms waiting for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.929461 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.932599 10408 pod_ready.go:92] pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.932606 10408 pod_ready.go:81] duration metric: took 3.140486ms waiting for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.932610 10408 pod_ready.go:38] duration metric: took 10.527510396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
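Each pod_ready.go wait above polls a pod until its Ready condition reports True. A hedged client-go sketch of one such wait; the kubeconfig path is a stand-in, and the interval and timeout are approximations of what the log shows:

```go
// Poll a kube-system pod until its Ready condition is True or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // stand-in path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollImmediate(400*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "etcd-pause-20220921152522-3535", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready:", err == nil)
}
```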
I0921 15:27:25.932619 10408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0921 15:27:25.939997 10408 ops.go:34] apiserver oom_adj: -16
I0921 15:27:25.940008 10408 kubeadm.go:631] restartCluster took 55.116747244s
I0921 15:27:25.940013 10408 kubeadm.go:398] StartCluster complete in 55.154103553s
I0921 15:27:25.940027 10408 settings.go:142] acquiring lock: {Name:mkb00f1de0b91d8f67bd982eab088d27845674b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:27:25.940102 10408 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:27:25.941204 10408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mka2f83e1cbd4124ff7179732fbb172d977cf2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:27:25.942042 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:27:25.944188 10408 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220921152522-3535" rescaled to 1
I0921 15:27:25.944221 10408 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0921 15:27:25.944255 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0921 15:27:25.944277 10408 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0921 15:27:25.944378 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:27:25.967437 10408 addons.go:65] Setting storage-provisioner=true in profile "pause-20220921152522-3535"
I0921 15:27:25.967440 10408 addons.go:65] Setting default-storageclass=true in profile "pause-20220921152522-3535"
I0921 15:27:25.967359 10408 out.go:177] * Verifying Kubernetes components...
I0921 15:27:25.967453 10408 addons.go:153] Setting addon storage-provisioner=true in "pause-20220921152522-3535"
I0921 15:27:25.967457 10408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220921152522-3535"
W0921 15:27:25.967460 10408 addons.go:162] addon storage-provisioner should already be in state true
I0921 15:27:26.012377 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:27:26.012436 10408 host.go:66] Checking if "pause-20220921152522-3535" exists ...
I0921 15:27:26.012762 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.012761 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.012794 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.012829 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.019897 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53028
I0921 15:27:26.020028 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53029
I0921 15:27:26.020328 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.020394 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.020706 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.020719 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.020801 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.020817 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.020929 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.021015 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.021115 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.021203 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.021283 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.021419 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.021443 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.023750 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:27:26.027574 10408 addons.go:153] Setting addon default-storageclass=true in "pause-20220921152522-3535"
W0921 15:27:26.027587 10408 addons.go:162] addon default-storageclass should already be in state true
I0921 15:27:26.027606 10408 host.go:66] Checking if "pause-20220921152522-3535" exists ...
I0921 15:27:26.027788 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53032
I0921 15:27:26.027854 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.027880 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.028560 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.029753 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.029767 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.030003 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.030113 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.030207 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.030282 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.031135 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:27:26.034331 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53034
I0921 15:27:26.055199 10408 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0921 15:27:26.038435 10408 node_ready.go:35] waiting up to 6m0s for node "pause-20220921152522-3535" to be "Ready" ...
I0921 15:27:26.038466 10408 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0921 15:27:26.055642 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.075151 10408 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:27:26.075161 10408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0921 15:27:26.075184 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:27:26.075306 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:27:26.075441 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.075451 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.075455 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:27:26.075546 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:27:26.075643 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:27:26.075669 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.076075 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.076097 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.082485 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53037
I0921 15:27:26.082858 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.083217 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.083234 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.083443 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.083534 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.083608 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.083699 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.084503 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:27:26.084648 10408 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0921 15:27:26.084657 10408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0921 15:27:26.084665 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:27:26.084734 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:27:26.084830 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:27:26.084916 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:27:26.085010 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:27:26.117393 10408 node_ready.go:49] node "pause-20220921152522-3535" has status "Ready":"True"
I0921 15:27:26.117403 10408 node_ready.go:38] duration metric: took 42.373374ms waiting for node "pause-20220921152522-3535" to be "Ready" ...
I0921 15:27:26.117410 10408 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:26.127239 10408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:27:26.137634 10408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0921 15:27:26.319821 10408 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:26.697611 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.697627 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.697784 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.697793 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.697804 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.697809 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.697836 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.697938 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.697946 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.697962 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712622 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712636 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.712825 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.712834 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.712839 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712844 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712846 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.712954 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.712962 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.712969 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712973 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712981 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.713114 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.713128 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.713142 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.735926 10408 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0921 15:27:26.773142 10408 addons.go:414] enableAddons completed in 828.831417ms
I0921 15:27:26.776027 10408 pod_ready.go:92] pod "coredns-565d847f94-9wtnp" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:26.776040 10408 pod_ready.go:81] duration metric: took 456.205251ms waiting for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:26.776049 10408 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.117622 10408 pod_ready.go:92] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.117632 10408 pod_ready.go:81] duration metric: took 341.577773ms waiting for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.117638 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.518637 10408 pod_ready.go:92] pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.518650 10408 pod_ready.go:81] duration metric: took 401.006674ms waiting for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.518660 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.918763 10408 pod_ready.go:92] pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.918778 10408 pod_ready.go:81] duration metric: took 400.10892ms waiting for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.918787 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.318657 10408 pod_ready.go:92] pod "kube-proxy-5c7jc" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:28.318670 10408 pod_ready.go:81] duration metric: took 399.877205ms waiting for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.318678 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.720230 10408 pod_ready.go:92] pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:28.720243 10408 pod_ready.go:81] duration metric: took 401.55845ms waiting for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.720250 10408 pod_ready.go:38] duration metric: took 2.602830576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:28.720263 10408 api_server.go:51] waiting for apiserver process to appear ...
I0921 15:27:28.720316 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:27:28.729887 10408 api_server.go:71] duration metric: took 2.78564504s to wait for apiserver process to appear ...
I0921 15:27:28.729899 10408 api_server.go:87] waiting for apiserver healthz status ...
I0921 15:27:28.729905 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:28.733744 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 200:
ok
I0921 15:27:28.734313 10408 api_server.go:140] control plane version: v1.25.2
I0921 15:27:28.734323 10408 api_server.go:130] duration metric: took 4.419338ms to wait for apiserver health ...
I0921 15:27:28.734328 10408 system_pods.go:43] waiting for kube-system pods to appear ...
I0921 15:27:28.920241 10408 system_pods.go:59] 7 kube-system pods found
I0921 15:27:28.920257 10408 system_pods.go:61] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:28.920261 10408 system_pods.go:61] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running
I0921 15:27:28.920274 10408 system_pods.go:61] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running
I0921 15:27:28.920279 10408 system_pods.go:61] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running
I0921 15:27:28.920283 10408 system_pods.go:61] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:28.920286 10408 system_pods.go:61] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running
I0921 15:27:28.920289 10408 system_pods.go:61] "storage-provisioner" [f71f00f0-f421-45c2-bfe4-c1e99f11b8e5] Running
I0921 15:27:28.920294 10408 system_pods.go:74] duration metric: took 185.961163ms to wait for pod list to return data ...
I0921 15:27:28.920300 10408 default_sa.go:34] waiting for default service account to be created ...
I0921 15:27:29.119704 10408 default_sa.go:45] found service account: "default"
I0921 15:27:29.119720 10408 default_sa.go:55] duration metric: took 199.41576ms for default service account to be created ...
I0921 15:27:29.119727 10408 system_pods.go:116] waiting for k8s-apps to be running ...
I0921 15:27:29.322362 10408 system_pods.go:86] 7 kube-system pods found
I0921 15:27:29.322375 10408 system_pods.go:89] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:29.322379 10408 system_pods.go:89] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running
I0921 15:27:29.322383 10408 system_pods.go:89] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running
I0921 15:27:29.322388 10408 system_pods.go:89] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running
I0921 15:27:29.322391 10408 system_pods.go:89] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:29.322395 10408 system_pods.go:89] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running
I0921 15:27:29.322398 10408 system_pods.go:89] "storage-provisioner" [f71f00f0-f421-45c2-bfe4-c1e99f11b8e5] Running
I0921 15:27:29.322402 10408 system_pods.go:126] duration metric: took 202.671392ms to wait for k8s-apps to be running ...
I0921 15:27:29.322407 10408 system_svc.go:44] waiting for kubelet service to be running ....
I0921 15:27:29.322452 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:27:29.331792 10408 system_svc.go:56] duration metric: took 9.381149ms WaitForService to wait for kubelet.
I0921 15:27:29.331804 10408 kubeadm.go:573] duration metric: took 3.387565971s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0921 15:27:29.331823 10408 node_conditions.go:102] verifying NodePressure condition ...
I0921 15:27:29.518084 10408 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0921 15:27:29.518100 10408 node_conditions.go:123] node cpu capacity is 2
I0921 15:27:29.518105 10408 node_conditions.go:105] duration metric: took 186.278888ms to run NodePressure ...
I0921 15:27:29.518113 10408 start.go:216] waiting for startup goroutines ...
I0921 15:27:29.551427 10408 start.go:506] kubectl: 1.25.0, cluster: 1.25.2 (minor skew: 0)
I0921 15:27:29.611327 10408 out.go:177] * Done! kubectl is now configured to use "pause-20220921152522-3535" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220921152522-3535 -n pause-20220921152522-3535
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-20220921152522-3535 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220921152522-3535 logs -n 25: (2.727630141s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| start | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:20 PDT | 21 Sep 22 15:21 PDT |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:21 PDT | |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:21 PDT | 21 Sep 22 15:21 PDT |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:21 PDT | 21 Sep 22 15:21 PDT |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| start | -p | cert-expiration-20220921151821-3535 | jenkins | v1.27.0 | 21 Sep 22 15:22 PDT | 21 Sep 22 15:22 PDT |
| | cert-expiration-20220921151821-3535 | | | | | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | cert-expiration-20220921151821-3535 | jenkins | v1.27.0 | 21 Sep 22 15:22 PDT | 21 Sep 22 15:22 PDT |
| | cert-expiration-20220921151821-3535 | | | | | |
| start | -p | stopped-upgrade-20220921152137-3535 | jenkins | v1.27.0 | 21 Sep 22 15:23 PDT | 21 Sep 22 15:24 PDT |
| | stopped-upgrade-20220921152137-3535 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=hyperkit | | | | | |
| start | -p | running-upgrade-20220921152233-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | 21 Sep 22 15:25 PDT |
| | running-upgrade-20220921152233-3535 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=hyperkit | | | | | |
| delete | -p | stopped-upgrade-20220921152137-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | 21 Sep 22 15:24 PDT |
| | stopped-upgrade-20220921152137-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | running-upgrade-20220921152233-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | running-upgrade-20220921152233-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --no-kubernetes | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-20220921152522-3535 | pause-20220921152522-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:26 PDT |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| delete | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --no-kubernetes | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| profile | list | minikube | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| profile | list --output=json | minikube | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| stop | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:26 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| delete | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | 21 Sep 22 15:26 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| start | -p false-20220921151637-3535 | false-20220921151637-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | |
| | --memory=2048 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=5m --cni=false | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-20220921152522-3535 | pause-20220921152522-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | 21 Sep 22 15:27 PDT |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/09/21 15:26:16
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.19.1 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0921 15:26:16.412297 10408 out.go:296] Setting OutFile to fd 1 ...
I0921 15:26:16.412857 10408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 15:26:16.412883 10408 out.go:309] Setting ErrFile to fd 2...
I0921 15:26:16.412925 10408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 15:26:16.413172 10408 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
I0921 15:26:16.413935 10408 out.go:303] Setting JSON to false
I0921 15:26:16.429337 10408 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5147,"bootTime":1663794029,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0921 15:26:16.429439 10408 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0921 15:26:16.451061 10408 out.go:177] * [pause-20220921152522-3535] minikube v1.27.0 on Darwin 12.6
I0921 15:26:16.492895 10408 notify.go:214] Checking for updates...
I0921 15:26:16.513942 10408 out.go:177] - MINIKUBE_LOCATION=14995
I0921 15:26:16.535147 10408 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:26:16.555899 10408 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0921 15:26:16.577004 10408 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0921 15:26:16.598036 10408 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube
I0921 15:26:16.619232 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:16.619572 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.619620 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:16.626042 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52950
I0921 15:26:16.626541 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:16.626992 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:16.627004 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:16.627211 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:16.627372 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:16.627501 10408 driver.go:365] Setting default libvirt URI to qemu:///system
I0921 15:26:16.627783 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.627806 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:16.634000 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52952
I0921 15:26:16.634367 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:16.634679 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:16.634691 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:16.634960 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:16.635067 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:16.661930 10408 out.go:177] * Using the hyperkit driver based on existing profile
I0921 15:26:16.703890 10408 start.go:284] selected driver: hyperkit
I0921 15:26:16.703910 10408 start.go:808] validating driver "hyperkit" against &{Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:16.704025 10408 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0921 15:26:16.704092 10408 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 15:26:16.704203 10408 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0921 15:26:16.710571 10408 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.27.0
I0921 15:26:16.713621 10408 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.713649 10408 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0921 15:26:16.715630 10408 cni.go:95] Creating CNI manager for ""
I0921 15:26:16.715647 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:26:16.715664 10408 start_flags.go:316] config:
{Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:16.715818 10408 iso.go:124] acquiring lock: {Name:mke8c57399926d29e846b47dd4be4625ba5fcaea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 15:26:16.774023 10408 out.go:177] * Starting control plane node pause-20220921152522-3535 in cluster pause-20220921152522-3535
I0921 15:26:14.112290 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | 2022/09/21 15:26:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
I0921 15:26:14.112374 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | 2022/09/21 15:26:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
I0921 15:26:14.112386 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | 2022/09/21 15:26:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
I0921 15:26:15.320346 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Attempt 3
I0921 15:26:15.320365 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:15.320474 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:15.321107 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Searching for 36:15:df:cc:5b:5b in /var/db/dhcpd_leases ...
I0921 15:26:15.321174 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Found 28 entries in /var/db/dhcpd_leases!
I0921 15:26:15.321185 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:3e:7a:92:24:5:ce ID:1,3e:7a:92:24:5:ce Lease:0x632b8f7f}
I0921 15:26:15.321194 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:c2:90:21:6e:75:6 ID:1,c2:90:21:6e:75:6 Lease:0x632ce0da}
I0921 15:26:15.321202 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:9e:f3:b1:1c:9b:1c ID:1,9e:f3:b1:1c:9b:1c Lease:0x632b8f54}
I0921 15:26:15.321211 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:66:c5:83:6d:55:91 ID:1,66:c5:83:6d:55:91 Lease:0x632ce03b}
I0921 15:26:15.321220 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:ea:9c:f4:77:1d:3d ID:1,ea:9c:f4:77:1d:3d Lease:0x632ce076}
I0921 15:26:15.321227 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:36:e:45:14:25:55 ID:1,36:e:45:14:25:55 Lease:0x632cdfb6}
I0921 15:26:15.321236 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:92:2e:30:54:49:f3 ID:1,92:2e:30:54:49:f3 Lease:0x632b8de5}
I0921 15:26:15.321243 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:83:83:3:65:1a ID:1,1a:83:83:3:65:1a Lease:0x632cdf36}
I0921 15:26:15.321252 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:b6:1a:2d:8:65:c5 ID:1,b6:1a:2d:8:65:c5 Lease:0x632cdf16}
I0921 15:26:15.321259 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:72:4c:c8:cf:4f:63 ID:1,72:4c:c8:cf:4f:63 Lease:0x632b8dac}
I0921 15:26:15.321274 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c2:f8:ac:87:d9:f0 ID:1,c2:f8:ac:87:d9:f0 Lease:0x632b8d80}
I0921 15:26:15.321291 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:62:35:c1:26:64:c0 ID:1,62:35:c1:26:64:c0 Lease:0x632b8d81}
I0921 15:26:15.321303 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:96:24:b5:8e:13:fc ID:1,96:24:b5:8e:13:fc Lease:0x632cde86}
I0921 15:26:15.321315 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:e:f1:67:89:3f:e3 ID:1,e:f1:67:89:3f:e3 Lease:0x632cde14}
I0921 15:26:15.321324 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:a2:3d:49:78:3b:4c ID:1,a2:3d:49:78:3b:4c Lease:0x632cdd68}
I0921 15:26:15.321339 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:1a:dd:bc:c:73:c4 ID:1,1a:dd:bc:c:73:c4 Lease:0x632cdd35}
I0921 15:26:15.321350 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:52:e5:24:3b:ab:4 ID:1,52:e5:24:3b:ab:4 Lease:0x632b897b}
I0921 15:26:15.321358 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:b4:fe:f4:b1:24 ID:1,be:b4:fe:f4:b1:24 Lease:0x632b8bde}
I0921 15:26:15.321365 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:8a:c8:9b:80:80:10 ID:1,8a:c8:9b:80:80:10 Lease:0x632b8bdc}
I0921 15:26:15.321376 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:12:72:ad:9f:f1:8f ID:1,12:72:ad:9f:f1:8f Lease:0x632b8511}
I0921 15:26:15.321387 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:4a:58:20:58:21:84 ID:1,4a:58:20:58:21:84 Lease:0x632b84fc}
I0921 15:26:15.321395 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:4e:eb:64:20:d8:40 ID:1,4e:eb:64:20:d8:40 Lease:0x632b84d4}
I0921 15:26:15.321404 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:96:cb:c8:56:48:73 ID:1,96:cb:c8:56:48:73 Lease:0x632cd609}
I0921 15:26:15.321411 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:60:ad:7c:55:a0 ID:1,3e:60:ad:7c:55:a0 Lease:0x632cd5c9}
I0921 15:26:15.321418 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:2:7a:1a:6a:a6:1f ID:1,2:7a:1a:6a:a6:1f Lease:0x632b843f}
I0921 15:26:15.321426 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:9a:e7:f8:d0:27:5a ID:1,9a:e7:f8:d0:27:5a Lease:0x632cd449}
I0921 15:26:15.321434 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:12:80:14:fc:de:ba ID:1,12:80:14:fc:de:ba Lease:0x632b82be}
I0921 15:26:15.321440 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:56:cf:47:52:47:7e ID:1,56:cf:47:52:47:7e Lease:0x632b8281}
I0921 15:26:17.321647 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Attempt 4
I0921 15:26:17.321668 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:17.321761 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:17.322288 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Searching for 36:15:df:cc:5b:5b in /var/db/dhcpd_leases ...
I0921 15:26:17.322356 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Found 29 entries in /var/db/dhcpd_leases!
I0921 15:26:17.322367 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:36:15:df:cc:5b:5b ID:1,36:15:df:cc:5b:5b Lease:0x632ce108}
I0921 15:26:17.322380 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Found match: 36:15:df:cc:5b:5b
I0921 15:26:17.322390 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | IP: 192.168.64.30
I0921 15:26:17.322428 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetConfigRaw
I0921 15:26:17.322951 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:17.323049 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:17.323142 10389 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
I0921 15:26:17.323154 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:17.323221 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:17.323276 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:17.323815 10389 main.go:134] libmachine: Detecting operating system of created instance...
I0921 15:26:17.323821 10389 main.go:134] libmachine: Waiting for SSH to be available...
I0921 15:26:17.323832 10389 main.go:134] libmachine: Getting to WaitForSSH function...
I0921 15:26:17.323840 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:17.323909 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:17.323997 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:17.324070 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:17.324148 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:17.324242 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:17.324383 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:17.324389 10389 main.go:134] libmachine: About to run SSH command:
exit 0
I0921 15:26:16.794876 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:16.794956 10408 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
I0921 15:26:16.795012 10408 cache.go:57] Caching tarball of preloaded images
I0921 15:26:16.795122 10408 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0921 15:26:16.795144 10408 cache.go:60] Finished verifying existence of preloaded tar for v1.25.2 on docker
I0921 15:26:16.795239 10408 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/config.json ...
I0921 15:26:16.795594 10408 cache.go:208] Successfully downloaded all kic artifacts
I0921 15:26:16.795620 10408 start.go:364] acquiring machines lock for pause-20220921152522-3535: {Name:mk2f7774d81f069136708da9f7558413d7930511 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0921 15:26:19.803647 10408 start.go:368] acquired machines lock for "pause-20220921152522-3535" in 3.008011859s
I0921 15:26:19.803693 10408 start.go:96] Skipping create...Using existing machine configuration
I0921 15:26:19.803704 10408 fix.go:55] fixHost starting:
I0921 15:26:19.804014 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:19.804040 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:19.810489 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52975
I0921 15:26:19.810845 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:19.811156 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:19.811167 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:19.811357 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:19.811458 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:19.811557 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:26:19.811664 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:19.811739 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:26:19.812542 10408 fix.go:103] recreateIfNeeded on pause-20220921152522-3535: state=Running err=<nil>
W0921 15:26:19.812564 10408 fix.go:129] unexpected machine state, will restart: <nil>
I0921 15:26:19.835428 10408 out.go:177] * Updating the running hyperkit "pause-20220921152522-3535" VM ...
I0921 15:26:19.856170 10408 machine.go:88] provisioning docker machine ...
I0921 15:26:19.856192 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:19.856377 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.856478 10408 buildroot.go:166] provisioning hostname "pause-20220921152522-3535"
I0921 15:26:19.856489 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.856574 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.856646 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:19.856744 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.856835 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.856914 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:19.857028 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.857193 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:19.857203 10408 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-20220921152522-3535 && echo "pause-20220921152522-3535" | sudo tee /etc/hostname
I0921 15:26:19.929633 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220921152522-3535
I0921 15:26:19.929693 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.929883 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:19.930020 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.930143 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.930253 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:19.930438 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.930577 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:19.930595 10408 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20220921152522-3535' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220921152522-3535/g' /etc/hosts;
else
echo '127.0.1.1 pause-20220921152522-3535' | sudo tee -a /etc/hosts;
fi
fi
I0921 15:26:19.992780 10408 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:19.992803 10408 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
I0921 15:26:19.992832 10408 buildroot.go:174] setting up certificates
I0921 15:26:19.992843 10408 provision.go:83] configureAuth start
I0921 15:26:19.992852 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.993017 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetIP
I0921 15:26:19.993132 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.993213 10408 provision.go:138] copyHostCerts
I0921 15:26:19.993302 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
I0921 15:26:19.993310 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
I0921 15:26:19.993450 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
I0921 15:26:19.993643 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
I0921 15:26:19.993649 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
I0921 15:26:19.993780 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1679 bytes)
I0921 15:26:19.994087 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
I0921 15:26:19.994094 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
I0921 15:26:19.994203 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
I0921 15:26:19.994341 10408 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.pause-20220921152522-3535 san=[192.168.64.28 192.168.64.28 localhost 127.0.0.1 minikube pause-20220921152522-3535]
I0921 15:26:20.145157 10408 provision.go:172] copyRemoteCerts
I0921 15:26:20.145229 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0921 15:26:20.145247 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.145395 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.145492 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.145591 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.145687 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.181860 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0921 15:26:20.204288 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0921 15:26:20.223046 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0921 15:26:20.242859 10408 provision.go:86] duration metric: configureAuth took 250.000259ms
I0921 15:26:20.242872 10408 buildroot.go:189] setting minikube options for container-runtime
I0921 15:26:20.243031 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:20.243050 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.243218 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.243320 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.243440 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.243555 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.243661 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.243798 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.243914 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.243922 10408 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0921 15:26:20.307004 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0921 15:26:20.307030 10408 buildroot.go:70] root file system type: tmpfs
I0921 15:26:20.307188 10408 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0921 15:26:20.307206 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.307379 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.307501 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.307587 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.307679 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.307823 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.307954 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.308011 10408 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0921 15:26:20.380017 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0921 15:26:20.380044 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.380193 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.380302 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.380410 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.380514 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.380665 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.380781 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.380797 10408 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0921 15:26:20.447616 10408 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:20.447629 10408 machine.go:91] provisioned docker machine in 591.445478ms
I0921 15:26:20.447641 10408 start.go:300] post-start starting for "pause-20220921152522-3535" (driver="hyperkit")
I0921 15:26:20.447646 10408 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0921 15:26:20.447659 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.447885 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0921 15:26:20.447901 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.448051 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.448156 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.448291 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.448405 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.484862 10408 ssh_runner.go:195] Run: cat /etc/os-release
I0921 15:26:20.487726 10408 info.go:137] Remote host: Buildroot 2021.02.12
I0921 15:26:20.487742 10408 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
I0921 15:26:20.487867 10408 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
I0921 15:26:20.488046 10408 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem -> 35352.pem in /etc/ssl/certs
I0921 15:26:20.488202 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0921 15:26:20.495074 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:20.515167 10408 start.go:303] post-start completed in 67.502258ms
I0921 15:26:20.515187 10408 fix.go:57] fixHost completed within 711.484594ms
I0921 15:26:20.515203 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.515368 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.515520 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.515638 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.515770 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.515941 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.516053 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.516063 10408 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0921 15:26:20.577712 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: 1663799180.686854068
I0921 15:26:20.577735 10408 fix.go:207] guest clock: 1663799180.686854068
I0921 15:26:20.577746 10408 fix.go:220] Guest: 2022-09-21 15:26:20.686854068 -0700 PDT Remote: 2022-09-21 15:26:20.51519 -0700 PDT m=+4.146234536 (delta=171.664068ms)
I0921 15:26:20.577765 10408 fix.go:191] guest clock delta is within tolerance: 171.664068ms
I0921 15:26:20.577770 10408 start.go:83] releasing machines lock for "pause-20220921152522-3535", held for 774.111447ms
I0921 15:26:20.577789 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.577928 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetIP
I0921 15:26:20.578042 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578174 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578318 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578705 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578809 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578906 10408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0921 15:26:20.578961 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.578984 10408 ssh_runner.go:195] Run: systemctl --version
I0921 15:26:20.578999 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.579066 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.579106 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.579182 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.579228 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.579290 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.579338 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.579415 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.579448 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.650058 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:20.650150 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:20.668593 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:20.668610 10408 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:20.668676 10408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0921 15:26:20.679656 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0921 15:26:20.692651 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:20.702013 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0921 15:26:20.715942 10408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0921 15:26:20.844184 10408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0921 15:26:20.974988 10408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:21.117162 10408 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:18.404949 10389 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:18.404961 10389 main.go:134] libmachine: Detecting the provisioner...
I0921 15:26:18.404967 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.405102 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.405195 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.405274 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.405369 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.405482 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:18.405601 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:18.405610 10389 main.go:134] libmachine: About to run SSH command:
cat /etc/os-release
I0921 15:26:18.483176 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g1be7c81-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0921 15:26:18.483226 10389 main.go:134] libmachine: found compatible host: buildroot
I0921 15:26:18.483233 10389 main.go:134] libmachine: Provisioning with buildroot...
I0921 15:26:18.483245 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetMachineName
I0921 15:26:18.483380 10389 buildroot.go:166] provisioning hostname "false-20220921151637-3535"
I0921 15:26:18.483392 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetMachineName
I0921 15:26:18.483485 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.483579 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.483675 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.483757 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.483857 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.483983 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:18.484098 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:18.484107 10389 main.go:134] libmachine: About to run SSH command:
sudo hostname false-20220921151637-3535 && echo "false-20220921151637-3535" | sudo tee /etc/hostname
I0921 15:26:18.570488 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: false-20220921151637-3535
I0921 15:26:18.570510 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.570653 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.570761 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.570862 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.570935 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.571055 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:18.571174 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:18.571186 10389 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sfalse-20220921151637-3535' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220921151637-3535/g' /etc/hosts;
else
echo '127.0.1.1 false-20220921151637-3535' | sudo tee -a /etc/hosts;
fi
fi
I0921 15:26:18.653580 10389 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:18.653600 10389 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
I0921 15:26:18.653620 10389 buildroot.go:174] setting up certificates
I0921 15:26:18.653630 10389 provision.go:83] configureAuth start
I0921 15:26:18.653637 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetMachineName
I0921 15:26:18.653765 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetIP
I0921 15:26:18.653853 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.653932 10389 provision.go:138] copyHostCerts
I0921 15:26:18.654006 10389 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
I0921 15:26:18.654013 10389 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
I0921 15:26:18.654127 10389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
I0921 15:26:18.654316 10389 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
I0921 15:26:18.654322 10389 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
I0921 15:26:18.654389 10389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
I0921 15:26:18.654553 10389 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
I0921 15:26:18.654559 10389 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
I0921 15:26:18.654614 10389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1679 bytes)
I0921 15:26:18.654728 10389 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.false-20220921151637-3535 san=[192.168.64.30 192.168.64.30 localhost 127.0.0.1 minikube false-20220921151637-3535]
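[Editor's note: the "generating server cert" step above amounts to signing a server certificate with the minikube CA, embedding the logged SANs (note the log lists 192.168.64.30 twice). A minimal sketch of that step using Go's crypto/x509, assuming the CA key is PKCS#1 RSA PEM as minikube's is; this is not minikube's actual provision.go code:]

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Load the CA cert/key (CaCertPath / CaPrivateKeyPath from the log).
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Fresh server key plus a template carrying the logged SANs.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.false-20220921151637-3535"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.64.30"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "false-20220921151637-3535"},
	}

	// Sign with the CA and write server.pem / server-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}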
I0921 15:26:18.931086 10389 provision.go:172] copyRemoteCerts
I0921 15:26:18.931145 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0921 15:26:18.931162 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.931342 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.931454 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.931547 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.931640 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:18.977451 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0921 15:26:18.993393 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0921 15:26:19.009261 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0921 15:26:19.024820 10389 provision.go:86] duration metric: configureAuth took 371.177848ms
I0921 15:26:19.024832 10389 buildroot.go:189] setting minikube options for container-runtime
I0921 15:26:19.024951 10389 config.go:180] Loaded profile config "false-20220921151637-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:19.024965 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.025081 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.025169 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.025260 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.025332 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.025427 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.025536 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.025635 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.025643 10389 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0921 15:26:19.103232 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0921 15:26:19.103245 10389 buildroot.go:70] root file system type: tmpfs
I0921 15:26:19.103367 10389 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0921 15:26:19.103382 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.103506 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.103596 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.103680 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.103774 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.103895 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.103995 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.104045 10389 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0921 15:26:19.189517 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0921 15:26:19.189540 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.189677 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.189768 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.189857 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.189943 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.190071 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.190182 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.190195 10389 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0921 15:26:19.657263 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
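[Editor's note: the one-liner above is a compare-and-swap for the unit file: diff the freshly rendered docker.service.new against the installed unit, and only on a difference (here: the file does not exist yet) move it into place and daemon-reload/enable/restart. A sketch of the same pattern in Go, run locally as a stand-in for the SSH runner; updateUnit is an invented name:]

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit mirrors the shell one-liner above: install the freshly
// rendered unit and bounce docker only when the content changed.
func updateUnit(path string, rendered []byte) error {
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, rendered) {
		return nil // unchanged: skip daemon-reload and restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // placeholder body
	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}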
I0921 15:26:19.657285 10389 main.go:134] libmachine: Checking connection to Docker...
I0921 15:26:19.657293 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetURL
I0921 15:26:19.657424 10389 main.go:134] libmachine: Docker is up and running!
I0921 15:26:19.657433 10389 main.go:134] libmachine: Reticulating splines...
I0921 15:26:19.657441 10389 client.go:171] LocalClient.Create took 10.876166724s
I0921 15:26:19.657453 10389 start.go:167] duration metric: libmachine.API.Create for "false-20220921151637-3535" took 10.876232302s
I0921 15:26:19.657465 10389 start.go:300] post-start starting for "false-20220921151637-3535" (driver="hyperkit")
I0921 15:26:19.657470 10389 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0921 15:26:19.657481 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.657606 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0921 15:26:19.657623 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.657718 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.657815 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.657900 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.657993 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:19.701002 10389 ssh_runner.go:195] Run: cat /etc/os-release
I0921 15:26:19.703660 10389 info.go:137] Remote host: Buildroot 2021.02.12
I0921 15:26:19.703675 10389 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
I0921 15:26:19.703763 10389 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
I0921 15:26:19.703898 10389 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem -> 35352.pem in /etc/ssl/certs
I0921 15:26:19.704044 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0921 15:26:19.710387 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:19.725495 10389 start.go:303] post-start completed in 68.018939ms
I0921 15:26:19.725521 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetConfigRaw
I0921 15:26:19.726077 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetIP
I0921 15:26:19.726225 10389 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/config.json ...
I0921 15:26:19.726508 10389 start.go:128] duration metric: createHost completed in 10.995583539s
I0921 15:26:19.726524 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.726609 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.726688 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.726756 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.726824 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.726940 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.727032 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.727039 10389 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0921 15:26:19.803566 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: 1663799179.904471962
I0921 15:26:19.803578 10389 fix.go:207] guest clock: 1663799179.904471962
I0921 15:26:19.803583 10389 fix.go:220] Guest: 2022-09-21 15:26:19.904471962 -0700 PDT Remote: 2022-09-21 15:26:19.726515 -0700 PDT m=+11.397811697 (delta=177.956962ms)
I0921 15:26:19.803600 10389 fix.go:191] guest clock delta is within tolerance: 177.956962ms
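[Editor's note: the guest clock is sampled with `date +%s.%N` (logged with its verbs eaten as `%!s(MISSING).%!N(MISSING)`), parsed, and compared against the host; a delta inside the tolerance, as here, avoids a clock resync. A sketch of that check, with an invented helper name:]

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest and reports
// how far its clock is from the host's. Float parsing loses a little
// nanosecond precision, which is fine for a tolerance check.
func guestClockDelta(out string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// The sample from the log above.
	d, ok, err := guestClockDelta("1663799179.904471962", 2*time.Second)
	fmt.Println(d, ok, err)
}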
I0921 15:26:19.803604 10389 start.go:83] releasing machines lock for "false-20220921151637-3535", held for 11.072844405s
I0921 15:26:19.803620 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.803781 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetIP
I0921 15:26:19.803886 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.803980 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804107 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804405 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804511 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804569 10389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0921 15:26:19.804599 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.804676 10389 ssh_runner.go:195] Run: systemctl --version
I0921 15:26:19.804691 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.804696 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.804788 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.804809 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.804910 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.804933 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.804984 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:19.805022 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.805139 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:19.847227 10389 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:19.847314 10389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:19.886987 10389 docker.go:611] Got preloaded images:
I0921 15:26:19.887002 10389 docker.go:617] registry.k8s.io/kube-apiserver:v1.25.2 wasn't preloaded
I0921 15:26:19.887058 10389 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0921 15:26:19.893540 10389 ssh_runner.go:195] Run: which lz4
I0921 15:26:19.895930 10389 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0921 15:26:19.898413 10389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0921 15:26:19.898432 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404136294 bytes)
I0921 15:26:21.239426 10389 docker.go:576] Took 1.343526 seconds to copy over tarball
I0921 15:26:21.239490 10389 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0921 15:26:24.582087 10389 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342576242s)
I0921 15:26:24.582101 10389 ssh_runner.go:146] rm: /preloaded.tar.lz4
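[Editor's note: the preload flow above is stat-then-copy-then-extract: check the guest for /preloaded.tar.lz4, scp the cached tarball over only when missing, untar it with lz4 into /var so /var/lib/docker arrives pre-populated, then delete it. A guest-side sketch of the extract step, with an invented function name; the scp leg is omitted:]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload extracts the preloaded image tarball into /var when it
// is present, then removes it, as the log above does.
func ensurePreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload missing, would scp it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	if err := ensurePreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}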
I0921 15:26:24.608006 10389 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0921 15:26:24.614121 10389 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0921 15:26:24.625086 10389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:24.705194 10389 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:25.931663 10389 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.226446575s)
I0921 15:26:25.931758 10389 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0921 15:26:25.941064 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0921 15:26:25.952201 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:25.960686 10389 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0921 15:26:25.983070 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:25.991760 10389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0921 15:26:26.004137 10389 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0921 15:26:26.084992 10389 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0921 15:26:26.179551 10389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:26.278839 10389 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:27.498830 10389 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.219969179s)
I0921 15:26:27.498903 10389 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0921 15:26:27.582227 10389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:27.670077 10389 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0921 15:26:27.680350 10389 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0921 15:26:27.680426 10389 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0921 15:26:27.684229 10389 start.go:471] Will wait 60s for crictl version
I0921 15:26:27.684283 10389 ssh_runner.go:195] Run: sudo crictl version
I0921 15:26:27.710285 10389 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.18
RuntimeApiVersion: 1.41.0
I0921 15:26:27.710350 10389 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:27.730543 10389 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:27.776346 10389 out.go:204] * Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
I0921 15:26:27.776499 10389 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0921 15:26:27.779532 10389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0921 15:26:27.786983 10389 localpath.go:92] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/client.crt -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/client.crt
I0921 15:26:27.787207 10389 localpath.go:117] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/client.key -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/client.key
I0921 15:26:27.787377 10389 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:27.787423 10389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:27.803222 10389 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:27.803238 10389 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:27.803305 10389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:27.818382 10389 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:27.818399 10389 cache_images.go:84] Images are preloaded, skipping loading
I0921 15:26:27.818461 10389 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0921 15:26:27.839813 10389 cni.go:95] Creating CNI manager for "false"
I0921 15:26:27.839834 10389 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0921 15:26:27.839848 10389 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.30 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220921151637-3535 NodeName:false-20220921151637-3535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.30 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0921 15:26:27.839927 10389 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.30
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "false-20220921151637-3535"
kubeletExtraArgs:
node-ip: 192.168.64.30
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.30"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
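[Editor's note: the three YAML documents above are rendered from per-profile values (node name, IP, Kubernetes version); the `"0%!"(MISSING)` runs in evictionHard are the logger mangling literal `%` characters and evidently read `"0%"` in the file actually written. A cut-down, hypothetical rendering of just the InitConfiguration document with text/template; minikube's real templates carry the full config and live elsewhere:]

package main

import (
	"os"
	"text/template"
)

// A hypothetical, trimmed-down version of the InitConfiguration
// template implied by the log above.
var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`))

func main() {
	_ = initCfg.Execute(os.Stdout, struct{ NodeIP, NodeName string }{
		"192.168.64.30", "false-20220921151637-3535",
	})
}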
I0921 15:26:27.839993 10389 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=false-20220921151637-3535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.30 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.2 ClusterName:false-20220921151637-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
I0921 15:26:27.840044 10389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
I0921 15:26:27.846485 10389 binaries.go:44] Found k8s binaries, skipping transfer
I0921 15:26:27.846528 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0921 15:26:27.852711 10389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0921 15:26:27.863719 10389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0921 15:26:27.874539 10389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0921 15:26:27.885620 10389 ssh_runner.go:195] Run: grep 192.168.64.30 control-plane.minikube.internal$ /etc/hosts
I0921 15:26:27.887836 10389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.30 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0921 15:26:27.895111 10389 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535 for IP: 192.168.64.30
I0921 15:26:27.895206 10389 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
I0921 15:26:27.895255 10389 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
I0921 15:26:27.895337 10389 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/client.key
I0921 15:26:27.895361 10389 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b
I0921 15:26:27.895377 10389 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b with IP's: [192.168.64.30 10.96.0.1 127.0.0.1 10.0.0.1]
I0921 15:26:28.090626 10389 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b ...
I0921 15:26:28.090639 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b: {Name:mkd0021f0880c17472bc34f2bb7b8af87d7a861d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.090958 10389 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b ...
I0921 15:26:28.090971 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b: {Name:mk0105b4976084bcdc477e16d22340c1f19a3c15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.091184 10389 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt
I0921 15:26:28.091356 10389 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key
I0921 15:26:28.091534 10389 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key
I0921 15:26:28.091547 10389 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt with IP's: []
I0921 15:26:28.128749 10389 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt ...
I0921 15:26:28.128759 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt: {Name:mkb235bcbbe39e8b7fc7fa2af71bd625a04514fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.129197 10389 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key ...
I0921 15:26:28.129204 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key: {Name:mkc7b1d50dce94488cf946b55e321c2fd8195b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.129644 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem (1338 bytes)
W0921 15:26:28.129681 10389 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535_empty.pem, impossibly tiny 0 bytes
I0921 15:26:28.129689 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
I0921 15:26:28.129738 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
I0921 15:26:28.129767 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
I0921 15:26:28.129794 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1679 bytes)
I0921 15:26:28.129854 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:28.130421 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0921 15:26:28.147670 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0921 15:26:28.163433 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0921 15:26:28.178707 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0921 15:26:28.193799 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0921 15:26:28.208841 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0921 15:26:28.224170 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0921 15:26:28.239235 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0921 15:26:28.254997 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0921 15:26:28.270476 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem --> /usr/share/ca-certificates/3535.pem (1338 bytes)
I0921 15:26:28.285761 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /usr/share/ca-certificates/35352.pem (1708 bytes)
I0921 15:26:28.300863 10389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0921 15:26:28.311541 10389 ssh_runner.go:195] Run: openssl version
I0921 15:26:28.314918 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35352.pem && ln -fs /usr/share/ca-certificates/35352.pem /etc/ssl/certs/35352.pem"
I0921 15:26:28.322006 10389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35352.pem
I0921 15:26:28.324825 10389 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:31 /usr/share/ca-certificates/35352.pem
I0921 15:26:28.324854 10389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35352.pem
I0921 15:26:28.328317 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35352.pem /etc/ssl/certs/3ec20f2e.0"
I0921 15:26:28.335399 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0921 15:26:28.342321 10389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:28.345213 10389 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:27 /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:28.345248 10389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:28.348680 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0921 15:26:28.355668 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3535.pem && ln -fs /usr/share/ca-certificates/3535.pem /etc/ssl/certs/3535.pem"
I0921 15:26:28.362704 10389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3535.pem
I0921 15:26:28.365564 10389 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:31 /usr/share/ca-certificates/3535.pem
I0921 15:26:28.365597 10389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3535.pem
I0921 15:26:28.369054 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3535.pem /etc/ssl/certs/51391683.0"
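[Editor's note: the openssl/ln pairs above give each CA in /usr/share/ca-certificates a `<subject-hash>.0` symlink in /etc/ssl/certs, which is how OpenSSL-linked clients locate trust anchors; the hash is whatever `openssl x509 -hash -noout` prints (b5213941 for minikubeCA here). A sketch of one such step, with an invented function name:]

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAHash reproduces the openssl+ln steps above: compute the subject
// hash of a CA cert and point /etc/ssl/certs/<hash>.0 at it.
func linkCAHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -fs: drop any existing link first
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCAHash("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}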
I0921 15:26:28.375971 10389 kubeadm.go:396] StartCluster: {Name:false-20220921151637-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:false-20220921151637-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.30 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:28.393673 10389 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:26:28.410852 10389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0921 15:26:28.417363 10389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0921 15:26:28.423501 10389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0921 15:26:28.429757 10389 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0921 15:26:28.429778 10389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0921 15:26:28.485563 10389 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
I0921 15:26:28.485628 10389 kubeadm.go:317] [preflight] Running pre-flight checks
I0921 15:26:28.613102 10389 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0921 15:26:28.613192 10389 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0921 15:26:28.613262 10389 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0921 15:26:28.713134 10389 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0921 15:26:29.173173 10408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.055980768s)
I0921 15:26:29.173240 10408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0921 15:26:29.288535 10408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:29.417731 10408 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0921 15:26:29.433270 10408 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0921 15:26:29.433356 10408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0921 15:26:29.447293 10408 start.go:471] Will wait 60s for crictl version
I0921 15:26:29.447353 10408 ssh_runner.go:195] Run: sudo crictl version
I0921 15:26:29.482799 10408 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.18
RuntimeApiVersion: 1.41.0
I0921 15:26:29.482858 10408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:29.651357 10408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:29.808439 10408 out.go:204] * Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
I0921 15:26:29.808534 10408 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0921 15:26:29.818111 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:29.818177 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:29.873620 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:29.873633 10408 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:29.873699 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:29.929931 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:29.929952 10408 cache_images.go:84] Images are preloaded, skipping loading
I0921 15:26:29.930056 10408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0921 15:26:30.064287 10408 cni.go:95] Creating CNI manager for ""
I0921 15:26:30.064305 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:26:30.064320 10408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0921 15:26:30.064331 10408 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.28 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220921152522-3535 NodeName:pause-20220921152522-3535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.28 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0921 15:26:30.064423 10408 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.28
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220921152522-3535"
kubeletExtraArgs:
node-ip: 192.168.64.28
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.28"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0921 15:26:30.064505 10408 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220921152522-3535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.28 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0921 15:26:30.064579 10408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
I0921 15:26:30.076550 10408 binaries.go:44] Found k8s binaries, skipping transfer
I0921 15:26:30.076638 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0921 15:26:30.090012 10408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0921 15:26:30.137803 10408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0921 15:26:30.178146 10408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0921 15:26:30.203255 10408 ssh_runner.go:195] Run: grep 192.168.64.28 control-plane.minikube.internal$ /etc/hosts
I0921 15:26:30.209779 10408 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535 for IP: 192.168.64.28
I0921 15:26:30.209879 10408 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
I0921 15:26:30.209934 10408 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
I0921 15:26:30.210019 10408 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key
I0921 15:26:30.210082 10408 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.key.6733b561
I0921 15:26:30.210133 10408 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.key
I0921 15:26:30.210333 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem (1338 bytes)
W0921 15:26:30.210375 10408 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535_empty.pem, impossibly tiny 0 bytes
I0921 15:26:30.210388 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
I0921 15:26:30.210421 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
I0921 15:26:30.210453 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
I0921 15:26:30.210483 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1679 bytes)
I0921 15:26:30.210550 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:30.211086 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0921 15:26:30.279069 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0921 15:26:30.343250 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0921 15:26:30.413180 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0921 15:26:30.448798 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0921 15:26:30.476175 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0921 15:26:30.497204 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0921 15:26:30.524103 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0921 15:26:30.558966 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /usr/share/ca-certificates/35352.pem (1708 bytes)
I0921 15:26:30.576319 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0921 15:26:30.592912 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem --> /usr/share/ca-certificates/3535.pem (1338 bytes)
I0921 15:26:30.609099 10408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0921 15:26:30.627179 10408 ssh_runner.go:195] Run: openssl version
I0921 15:26:30.632801 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3535.pem && ln -fs /usr/share/ca-certificates/3535.pem /etc/ssl/certs/3535.pem"
I0921 15:26:30.641473 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3535.pem
I0921 15:26:30.645794 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:31 /usr/share/ca-certificates/3535.pem
I0921 15:26:30.645836 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3535.pem
I0921 15:26:30.649794 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3535.pem /etc/ssl/certs/51391683.0"
I0921 15:26:30.657630 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35352.pem && ln -fs /usr/share/ca-certificates/35352.pem /etc/ssl/certs/35352.pem"
I0921 15:26:30.665747 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35352.pem
I0921 15:26:30.669804 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:31 /usr/share/ca-certificates/35352.pem
I0921 15:26:30.669850 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35352.pem
I0921 15:26:30.679638 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35352.pem /etc/ssl/certs/3ec20f2e.0"
I0921 15:26:30.700907 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0921 15:26:30.734369 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.762750 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:27 /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.762827 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.777627 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
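The openssl/ln sequence above is the standard OpenSSL CA-directory layout: each trusted certificate gets a symlink named <subject-hash>.0 under /etc/ssl/certs. A rough Go sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does (error handling abbreviated):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// "openssl x509 -hash -noout -in <pem>" prints the subject hash,
	// e.g. 51391683 or b5213941 as seen in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/3535.pem",
		"/usr/share/ca-certificates/35352.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := installCA(pem); err != nil {
			fmt.Fprintln(os.Stderr, pem, err)
		}
	}
}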
I0921 15:26:30.785856 10408 kubeadm.go:396] StartCluster: {Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:30.785963 10408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:26:30.816264 10408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0921 15:26:30.823179 10408 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0921 15:26:30.823195 10408 kubeadm.go:627] restartCluster start
I0921 15:26:30.823236 10408 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0921 15:26:30.837045 10408 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0921 15:26:30.837457 10408 kubeconfig.go:92] found "pause-20220921152522-3535" server: "https://192.168.64.28:8443"
I0921 15:26:30.837839 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:26:30.838375 10408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0921 15:26:30.852535 10408 api_server.go:165] Checking apiserver status ...
I0921 15:26:30.852588 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:26:30.868059 10408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4520/cgroup
I0921 15:26:30.876185 10408 api_server.go:181] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope"
I0921 15:26:30.876238 10408 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope/freezer.state
I0921 15:26:30.912452 10408 api_server.go:203] freezer state: "THAWED"
I0921 15:26:30.912472 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
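The freezer probe above distinguishes a paused apiserver from a running one: find the pid's freezer cgroup in /proc/<pid>/cgroup, then read its freezer.state ("THAWED" vs "FROZEN"). A self-contained sketch assuming the same cgroup v1 layout as the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// cgroup v1 lines look like "2:freezer:/kubepods.slice/...".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup for pid %d", pid)
}

func main() {
	state, err := freezerState(4520) // pid taken from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("freezer state:", state)
}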
I0921 15:26:28.751035 10389 out.go:204] - Generating certificates and keys ...
I0921 15:26:28.751152 10389 kubeadm.go:317] [certs] Using existing ca certificate authority
I0921 15:26:28.751236 10389 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0921 15:26:28.782482 10389 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0921 15:26:29.137189 10389 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0921 15:26:29.241745 10389 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0921 15:26:29.350166 10389 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0921 15:26:29.505698 10389 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0921 15:26:29.505932 10389 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [false-20220921151637-3535 localhost] and IPs [192.168.64.30 127.0.0.1 ::1]
I0921 15:26:29.604706 10389 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0921 15:26:29.604909 10389 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [false-20220921151637-3535 localhost] and IPs [192.168.64.30 127.0.0.1 ::1]
I0921 15:26:29.834088 10389 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0921 15:26:29.943628 10389 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0921 15:26:30.177452 10389 kubeadm.go:317] [certs] Generating "sa" key and public key
I0921 15:26:30.177562 10389 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0921 15:26:30.679764 10389 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0921 15:26:30.762950 10389 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0921 15:26:30.975611 10389 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0921 15:26:31.368343 10389 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0921 15:26:31.380985 10389 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0921 15:26:31.381763 10389 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0921 15:26:31.381810 10389 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0921 15:26:31.468060 10389 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0921 15:26:31.487973 10389 out.go:204] - Booting up control plane ...
I0921 15:26:31.488058 10389 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0921 15:26:31.488140 10389 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0921 15:26:31.488216 10389 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0921 15:26:31.488288 10389 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0921 15:26:31.488408 10389 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0921 15:26:35.914013 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:35.914061 10408 retry.go:31] will retry after 263.082536ms: state is "Stopped"
I0921 15:26:36.179260 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:41.180983 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:41.181007 10408 retry.go:31] will retry after 381.329545ms: state is "Stopped"
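The probe loop here (api_server.go plus retry.go) boils down to: GET /healthz with a short client timeout, and on any failure sleep a randomized, growing interval ("will retry after 263.082536ms") and try again until a deadline. A stand-alone sketch of that pattern; the InsecureSkipVerify shortcut is for brevity only, the real checker trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s timeouts seen above
		// Assumption for the sketch: skip cert verification instead of
		// loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	backoff := 250 * time.Millisecond
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			err = fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		// Jitter the sleep, as in the "will retry after ..." lines.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("%v; will retry after %v\n", err, sleep)
		time.Sleep(sleep)
		backoff += backoff / 2
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.64.28:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}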
I0921 15:26:43.469751 10389 kubeadm.go:317] [apiclient] All control plane components are healthy after 12.003918 seconds
I0921 15:26:43.469852 10389 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0921 15:26:43.477591 10389 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0921 15:26:44.989240 10389 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I0921 15:26:44.989436 10389 kubeadm.go:317] [mark-control-plane] Marking the node false-20220921151637-3535 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0921 15:26:45.496387 10389 kubeadm.go:317] [bootstrap-token] Using token: gw23ty.315hs4knjisv0ijr
I0921 15:26:41.563913 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:45.534959 10389 out.go:204] - Configuring RBAC rules ...
I0921 15:26:45.535164 10389 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0921 15:26:45.535348 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0921 15:26:45.575312 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0921 15:26:45.577832 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0921 15:26:45.580659 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0921 15:26:45.582707 10389 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0921 15:26:45.589329 10389 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0921 15:26:45.765645 10389 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I0921 15:26:45.903347 10389 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I0921 15:26:45.903987 10389 kubeadm.go:317]
I0921 15:26:45.904052 10389 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I0921 15:26:45.904063 10389 kubeadm.go:317]
I0921 15:26:45.904125 10389 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I0921 15:26:45.904133 10389 kubeadm.go:317]
I0921 15:26:45.904151 10389 kubeadm.go:317] mkdir -p $HOME/.kube
I0921 15:26:45.904270 10389 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0921 15:26:45.904382 10389 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0921 15:26:45.904399 10389 kubeadm.go:317]
I0921 15:26:45.904507 10389 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I0921 15:26:45.904518 10389 kubeadm.go:317]
I0921 15:26:45.904599 10389 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I0921 15:26:45.904608 10389 kubeadm.go:317]
I0921 15:26:45.904652 10389 kubeadm.go:317] You should now deploy a pod network to the cluster.
I0921 15:26:45.904743 10389 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0921 15:26:45.904821 10389 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0921 15:26:45.904853 10389 kubeadm.go:317]
I0921 15:26:45.904929 10389 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I0921 15:26:45.905009 10389 kubeadm.go:317] and service account keys on each node and then running the following as root:
I0921 15:26:45.905013 10389 kubeadm.go:317]
I0921 15:26:45.905081 10389 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token gw23ty.315hs4knjisv0ijr \
I0921 15:26:45.905165 10389 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:706daf9048108456ab2312c550f8f0627aeca112971c3da5a874015a0cee155c \
I0921 15:26:45.905182 10389 kubeadm.go:317] --control-plane
I0921 15:26:45.905187 10389 kubeadm.go:317]
I0921 15:26:45.905254 10389 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I0921 15:26:45.905261 10389 kubeadm.go:317]
I0921 15:26:45.905329 10389 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token gw23ty.315hs4knjisv0ijr \
I0921 15:26:45.905405 10389 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:706daf9048108456ab2312c550f8f0627aeca112971c3da5a874015a0cee155c
I0921 15:26:45.906103 10389 kubeadm.go:317] W0921 22:26:28.588830 1256 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0921 15:26:45.906192 10389 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
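The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. It can be recomputed from ca.crt with the standard library alone:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// MarshalPKIXPublicKey yields the SPKI DER that kubeadm hashes.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}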
I0921 15:26:45.906207 10389 cni.go:95] Creating CNI manager for "false"
I0921 15:26:45.906225 10389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0921 15:26:45.906290 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:45.906301 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=false-20220921151637-3535 minikube.k8s.io/updated_at=2022_09_21T15_26_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:46.087744 10389 ops.go:34] apiserver oom_adj: -16
I0921 15:26:46.087768 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:46.661358 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:47.163233 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:47.661991 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:48.162015 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:46.564586 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:46.766257 10408 api_server.go:165] Checking apiserver status ...
I0921 15:26:46.766358 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:26:46.776615 10408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4520/cgroup
I0921 15:26:46.782756 10408 api_server.go:181] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope"
I0921 15:26:46.782801 10408 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope/freezer.state
I0921 15:26:46.789298 10408 api_server.go:203] freezer state: "THAWED"
I0921 15:26:46.789309 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:51.288815 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": read tcp 192.168.64.1:52998->192.168.64.28:8443: read: connection reset by peer
I0921 15:26:51.288848 10408 retry.go:31] will retry after 242.214273ms: state is "Stopped"
I0921 15:26:48.662979 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:49.163023 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:49.662057 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:50.162176 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:50.663300 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:51.162051 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:51.661237 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:52.161318 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:52.663231 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:53.162177 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:51.532207 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:51.632400 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:51.632425 10408 retry.go:31] will retry after 300.724609ms: state is "Stopped"
I0921 15:26:51.934415 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:52.035144 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:52.035176 10408 retry.go:31] will retry after 427.113882ms: state is "Stopped"
I0921 15:26:52.464328 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:52.566391 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:52.566426 10408 retry.go:31] will retry after 382.2356ms: state is "Stopped"
I0921 15:26:52.948987 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:53.049570 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:53.049605 10408 retry.go:31] will retry after 505.529557ms: state is "Stopped"
I0921 15:26:53.556334 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:53.658245 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:53.658268 10408 retry.go:31] will retry after 609.195524ms: state is "Stopped"
I0921 15:26:54.269593 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:54.371296 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:54.371340 10408 retry.go:31] will retry after 858.741692ms: state is "Stopped"
I0921 15:26:55.230116 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:55.331214 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:55.331251 10408 retry.go:31] will retry after 1.201160326s: state is "Stopped"
I0921 15:26:53.661186 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:54.163293 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:54.661188 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:55.161203 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:55.661768 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:56.161278 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:56.661209 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:57.161293 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:57.227024 10389 kubeadm.go:1067] duration metric: took 11.320770189s to wait for elevateKubeSystemPrivileges.
I0921 15:26:57.227047 10389 kubeadm.go:398] StartCluster complete in 28.851048117s
I0921 15:26:57.227062 10389 settings.go:142] acquiring lock: {Name:mkb00f1de0b91d8f67bd982eab088d27845674b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:57.227132 10389 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:26:57.227768 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mka2f83e1cbd4124ff7179732fbb172d977cf2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:57.740783 10389 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20220921151637-3535" rescaled to 1
I0921 15:26:57.740812 10389 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.64.30 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0921 15:26:57.740821 10389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0921 15:26:57.740854 10389 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0921 15:26:57.740962 10389 config.go:180] Loaded profile config "false-20220921151637-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:57.786566 10389 addons.go:65] Setting storage-provisioner=true in profile "false-20220921151637-3535"
I0921 15:26:57.786585 10389 addons.go:153] Setting addon storage-provisioner=true in "false-20220921151637-3535"
I0921 15:26:57.786585 10389 addons.go:65] Setting default-storageclass=true in profile "false-20220921151637-3535"
I0921 15:26:57.786492 10389 out.go:177] * Verifying Kubernetes components...
W0921 15:26:57.786593 10389 addons.go:162] addon storage-provisioner should already be in state true
I0921 15:26:57.786605 10389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20220921151637-3535"
I0921 15:26:57.786637 10389 host.go:66] Checking if "false-20220921151637-3535" exists ...
I0921 15:26:57.823578 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:26:57.824055 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.824059 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.824098 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.824128 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.831913 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53008
I0921 15:26:57.831981 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53009
I0921 15:26:57.832340 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.832352 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.832684 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.832694 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.832700 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.832713 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.832896 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.832944 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.832993 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:57.833084 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:57.833170 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:57.833345 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.833360 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.839848 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53012
I0921 15:26:57.840218 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.840571 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.840590 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.840793 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.840888 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:57.840964 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:57.841057 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:57.841584 10389 addons.go:153] Setting addon default-storageclass=true in "false-20220921151637-3535"
W0921 15:26:57.841596 10389 addons.go:162] addon default-storageclass should already be in state true
I0921 15:26:57.841612 10389 host.go:66] Checking if "false-20220921151637-3535" exists ...
I0921 15:26:57.841859 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.841874 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.841903 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:57.848370 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53014
I0921 15:26:57.879837 10389 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0921 15:26:57.853392 10389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0921 15:26:57.856801 10389 node_ready.go:35] waiting up to 5m0s for node "false-20220921151637-3535" to be "Ready" ...
I0921 15:26:57.880708 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.901652 10389 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:26:57.901674 10389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0921 15:26:57.901717 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:57.902040 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:57.902220 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.902228 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:57.902244 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.902481 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:57.902678 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.902711 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:57.903323 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.903348 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.907923 10389 node_ready.go:49] node "false-20220921151637-3535" has status "Ready":"True"
I0921 15:26:57.907937 10389 node_ready.go:38] duration metric: took 6.436476ms waiting for node "false-20220921151637-3535" to be "Ready" ...
I0921 15:26:57.907943 10389 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:26:57.910202 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53017
I0921 15:26:57.910546 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.910873 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.910889 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.911076 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.911170 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:57.911256 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:57.911338 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:57.912159 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:57.912315 10389 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0921 15:26:57.912323 10389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0921 15:26:57.912331 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:57.912418 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:57.912497 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:57.912584 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:57.912659 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:57.919652 10389 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-pns2v" in "kube-system" namespace to be "Ready" ...
I0921 15:26:58.008677 10389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0921 15:26:58.015955 10389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:26:59.137018 10389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.235523727s)
I0921 15:26:59.137048 10389 start.go:810] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS
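The pipeline completed above splices a "hosts" block into the CoreDNS Corefile so host.minikube.internal resolves to the host's gateway IP, inserting it just before the forward plugin exactly as the sed expression does. The same edit in Go, operating on the Corefile text directly (the sample Corefile is illustrative):

package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("    hosts {\n       %s host.minikube.internal\n       fallthrough\n    }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block just before the forward plugin, as the
		// sed expression "/^ *forward . \/etc\/resolv.conf.*/i" does.
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.64.1"))
}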
I0921 15:26:59.214166 10389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.198193011s)
I0921 15:26:59.214197 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214212 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214261 10389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.205563718s)
I0921 15:26:59.214276 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214283 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214398 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Closing plugin on server side
I0921 15:26:59.214419 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Closing plugin on server side
I0921 15:26:59.214438 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214449 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214452 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214458 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214464 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214465 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214473 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214483 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214582 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214593 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214605 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214615 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214655 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214663 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214784 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214810 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214847 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Closing plugin on server side
I0921 15:26:59.257530 10389 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0921 15:26:56.533116 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:56.635643 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:56.635670 10408 retry.go:31] will retry after 1.723796097s: state is "Stopped"
I0921 15:26:58.359704 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:58.461478 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:58.461505 10408 retry.go:31] will retry after 1.596532639s: state is "Stopped"
I0921 15:27:00.059136 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:00.159945 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:27:00.159971 10408 api_server.go:165] Checking apiserver status ...
I0921 15:27:00.160018 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0921 15:27:00.169632 10408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0921 15:27:00.169647 10408 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
I0921 15:27:00.169656 10408 kubeadm.go:1114] stopping kube-system containers ...
I0921 15:27:00.169722 10408 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:27:00.201882 10408 docker.go:443] Stopping containers: [d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49]
I0921 15:27:00.201952 10408 ssh_runner.go:195] Run: docker stop d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49
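Stopping the kube-system containers above is two docker CLI calls: list the IDs whose names match k8s_.*_(kube-system)_, then stop them all in one invocation, as the log shows. A compact sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	if err := exec.Command("docker", args...).Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}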
I0921 15:26:59.279382 10389 addons.go:414] enableAddons completed in 1.538525769s
I0921 15:26:59.940505 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:02.438511 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:05.344188 10408 ssh_runner.go:235] Completed: docker stop d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49: (5.142213633s)
I0921 15:27:05.344244 10408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0921 15:27:05.419551 10408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0921 15:27:05.433375 10408 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Sep 21 22:25 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Sep 21 22:25 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2039 Sep 21 22:25 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Sep 21 22:25 /etc/kubernetes/scheduler.conf
I0921 15:27:05.433432 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0921 15:27:05.439704 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0921 15:27:05.445874 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0921 15:27:05.453215 10408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0921 15:27:05.453270 10408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0921 15:27:05.459417 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0921 15:27:05.465309 10408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0921 15:27:05.465358 10408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0921 15:27:05.476008 10408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0921 15:27:05.484410 10408 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
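The grep/rm sweep above drops any /etc/kubernetes/*.conf that no longer mentions the expected control-plane endpoint, so the kubeadm phases below regenerate them. The same check, sketched in Go:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil {
			continue // missing file: kubeadm will create it
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			_ = os.Remove(conf)
		}
	}
}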
I0921 15:27:05.484426 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:05.534434 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:04.440960 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:06.941172 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:06.469884 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.628867 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.698897 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
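Rather than a full "kubeadm init", the restart path re-runs individual init phases against the generated config, in the order shown above. A sketch of that sequencing via the kubeadm CLI:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order copied from the log: certs, kubeconfig, kubelet-start,
	// control-plane, etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}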
I0921 15:27:06.759299 10408 api_server.go:51] waiting for apiserver process to appear ...
I0921 15:27:06.759353 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:27:06.778540 10408 api_server.go:71] duration metric: took 19.241402ms to wait for apiserver process to appear ...
I0921 15:27:06.778552 10408 api_server.go:87] waiting for apiserver healthz status ...
I0921 15:27:06.778559 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:09.441803 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:09.938218 10389 pod_ready.go:97] error getting pod "coredns-565d847f94-pns2v" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-pns2v" not found
I0921 15:27:09.938237 10389 pod_ready.go:81] duration metric: took 12.018553938s waiting for pod "coredns-565d847f94-pns2v" in "kube-system" namespace to be "Ready" ...
E0921 15:27:09.938247 10389 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-565d847f94-pns2v" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-pns2v" not found
I0921 15:27:09.938253 10389 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-wwhtk" in "kube-system" namespace to be "Ready" ...
I0921 15:27:11.950940 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
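The pod_ready loop polls each system-critical pod until its Ready condition reports "True", and starts over when a pod is replaced, as happened with coredns-565d847f94-pns2v above. A bare-bones version using kubectl's jsonpath output instead of client-go (pod name taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(ns, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err // e.g. "pods ... not found" when the pod was replaced
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // matches the 5m0s wait above
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "coredns-565d847f94-wwhtk")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod")
}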
I0921 15:27:11.780440 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:27:12.280518 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.000183 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0921 15:27:14.000198 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0921 15:27:14.282668 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.289281 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0921 15:27:14.289293 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0921 15:27:14.780762 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.786529 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0921 15:27:14.786540 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0921 15:27:15.280930 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:15.288106 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 200:
ok
I0921 15:27:15.292969 10408 api_server.go:140] control plane version: v1.25.2
I0921 15:27:15.292981 10408 api_server.go:130] duration metric: took 8.514415313s to wait for apiserver health ...
I0921 15:27:15.292986 10408 cni.go:95] Creating CNI manager for ""
I0921 15:27:15.292994 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:27:15.293004 10408 system_pods.go:43] waiting for kube-system pods to appear ...
I0921 15:27:15.298309 10408 system_pods.go:59] 6 kube-system pods found
I0921 15:27:15.298324 10408 system_pods.go:61] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:15.298330 10408 system_pods.go:61] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0921 15:27:15.298335 10408 system_pods.go:61] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0921 15:27:15.298340 10408 system_pods.go:61] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0921 15:27:15.298344 10408 system_pods.go:61] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:15.298348 10408 system_pods.go:61] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0921 15:27:15.298352 10408 system_pods.go:74] duration metric: took 5.344262ms to wait for pod list to return data ...
I0921 15:27:15.298357 10408 node_conditions.go:102] verifying NodePressure condition ...
I0921 15:27:15.300304 10408 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0921 15:27:15.300319 10408 node_conditions.go:123] node cpu capacity is 2
I0921 15:27:15.300328 10408 node_conditions.go:105] duration metric: took 1.967816ms to run NodePressure ...
I0921 15:27:15.300342 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:15.402185 10408 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0921 15:27:15.405062 10408 kubeadm.go:778] kubelet initialised
I0921 15:27:15.405072 10408 kubeadm.go:779] duration metric: took 2.873657ms waiting for restarted kubelet to initialise ...
I0921 15:27:15.405080 10408 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:15.408132 10408 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:15.411452 10408 pod_ready.go:92] pod "coredns-565d847f94-9wtnp" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:15.411459 10408 pod_ready.go:81] duration metric: took 3.317632ms waiting for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:15.411465 10408 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:14.445892 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:16.945831 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:17.420289 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:19.421503 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:18.946719 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:20.947256 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:22.950309 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:21.919889 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:24.419226 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:25.920028 10408 pod_ready.go:92] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.920043 10408 pod_ready.go:81] duration metric: took 10.508561161s waiting for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.920049 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.923063 10408 pod_ready.go:92] pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.923071 10408 pod_ready.go:81] duration metric: took 3.017613ms waiting for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.923077 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.926284 10408 pod_ready.go:92] pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.926292 10408 pod_ready.go:81] duration metric: took 3.20987ms waiting for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.926297 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.929448 10408 pod_ready.go:92] pod "kube-proxy-5c7jc" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.929456 10408 pod_ready.go:81] duration metric: took 3.154194ms waiting for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.929461 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.932599 10408 pod_ready.go:92] pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.932606 10408 pod_ready.go:81] duration metric: took 3.140486ms waiting for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.932610 10408 pod_ready.go:38] duration metric: took 10.527510396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:25.932619 10408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0921 15:27:25.939997 10408 ops.go:34] apiserver oom_adj: -16
I0921 15:27:25.940008 10408 kubeadm.go:631] restartCluster took 55.116747244s
I0921 15:27:25.940013 10408 kubeadm.go:398] StartCluster complete in 55.154103553s
I0921 15:27:25.940027 10408 settings.go:142] acquiring lock: {Name:mkb00f1de0b91d8f67bd982eab088d27845674b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:27:25.940102 10408 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:27:25.941204 10408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mka2f83e1cbd4124ff7179732fbb172d977cf2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:27:25.942042 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:27:25.944188 10408 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220921152522-3535" rescaled to 1
I0921 15:27:25.944221 10408 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0921 15:27:25.944255 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0921 15:27:25.944277 10408 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0921 15:27:25.944378 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:27:25.967437 10408 addons.go:65] Setting storage-provisioner=true in profile "pause-20220921152522-3535"
I0921 15:27:25.967440 10408 addons.go:65] Setting default-storageclass=true in profile "pause-20220921152522-3535"
I0921 15:27:25.967359 10408 out.go:177] * Verifying Kubernetes components...
I0921 15:27:25.967453 10408 addons.go:153] Setting addon storage-provisioner=true in "pause-20220921152522-3535"
I0921 15:27:25.967457 10408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220921152522-3535"
W0921 15:27:25.967460 10408 addons.go:162] addon storage-provisioner should already be in state true
I0921 15:27:26.012377 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:27:26.012436 10408 host.go:66] Checking if "pause-20220921152522-3535" exists ...
I0921 15:27:26.012762 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.012761 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.012794 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.012829 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.019897 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53028
I0921 15:27:26.020028 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53029
I0921 15:27:26.020328 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.020394 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.020706 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.020719 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.020801 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.020817 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.020929 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.021015 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.021115 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.021203 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.021283 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.021419 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.021443 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.023750 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:27:26.027574 10408 addons.go:153] Setting addon default-storageclass=true in "pause-20220921152522-3535"
W0921 15:27:26.027587 10408 addons.go:162] addon default-storageclass should already be in state true
I0921 15:27:26.027606 10408 host.go:66] Checking if "pause-20220921152522-3535" exists ...
I0921 15:27:26.027788 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53032
I0921 15:27:26.027854 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.027880 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.028560 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.029753 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.029767 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.030003 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.030113 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.030207 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.030282 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.031135 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:27:26.034331 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53034
I0921 15:27:26.055199 10408 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0921 15:27:26.038435 10408 node_ready.go:35] waiting up to 6m0s for node "pause-20220921152522-3535" to be "Ready" ...
I0921 15:27:26.038466 10408 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0921 15:27:26.055642 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.075151 10408 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:27:26.075161 10408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0921 15:27:26.075184 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:27:26.075306 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:27:26.075441 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.075451 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.075455 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:27:26.075546 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:27:26.075643 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:27:26.075669 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.076075 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.076097 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.082485 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53037
I0921 15:27:26.082858 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.083217 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.083234 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.083443 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.083534 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.083608 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.083699 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.084503 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:27:26.084648 10408 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0921 15:27:26.084657 10408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0921 15:27:26.084665 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:27:26.084734 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:27:26.084830 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:27:26.084916 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:27:26.085010 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:27:26.117393 10408 node_ready.go:49] node "pause-20220921152522-3535" has status "Ready":"True"
I0921 15:27:26.117403 10408 node_ready.go:38] duration metric: took 42.373374ms waiting for node "pause-20220921152522-3535" to be "Ready" ...
I0921 15:27:26.117410 10408 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:26.127239 10408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:27:26.137634 10408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0921 15:27:26.319821 10408 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:26.697611 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.697627 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.697784 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.697793 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.697804 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.697809 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.697836 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.697938 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.697946 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.697962 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712622 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712636 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.712825 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.712834 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.712839 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712844 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712846 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.712954 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.712962 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.712969 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712973 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712981 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.713114 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.713128 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.713142 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.735926 10408 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0921 15:27:25.446939 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:27.947781 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:26.773142 10408 addons.go:414] enableAddons completed in 828.831417ms
I0921 15:27:26.776027 10408 pod_ready.go:92] pod "coredns-565d847f94-9wtnp" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:26.776040 10408 pod_ready.go:81] duration metric: took 456.205251ms waiting for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:26.776049 10408 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.117622 10408 pod_ready.go:92] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.117632 10408 pod_ready.go:81] duration metric: took 341.577773ms waiting for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.117638 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.518637 10408 pod_ready.go:92] pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.518650 10408 pod_ready.go:81] duration metric: took 401.006674ms waiting for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.518660 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.918763 10408 pod_ready.go:92] pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.918778 10408 pod_ready.go:81] duration metric: took 400.10892ms waiting for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.918787 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.318657 10408 pod_ready.go:92] pod "kube-proxy-5c7jc" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:28.318670 10408 pod_ready.go:81] duration metric: took 399.877205ms waiting for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.318678 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.720230 10408 pod_ready.go:92] pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:28.720243 10408 pod_ready.go:81] duration metric: took 401.55845ms waiting for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.720250 10408 pod_ready.go:38] duration metric: took 2.602830576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:28.720263 10408 api_server.go:51] waiting for apiserver process to appear ...
I0921 15:27:28.720316 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:27:28.729887 10408 api_server.go:71] duration metric: took 2.78564504s to wait for apiserver process to appear ...
I0921 15:27:28.729899 10408 api_server.go:87] waiting for apiserver healthz status ...
I0921 15:27:28.729905 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:28.733744 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 200:
ok
I0921 15:27:28.734313 10408 api_server.go:140] control plane version: v1.25.2
I0921 15:27:28.734323 10408 api_server.go:130] duration metric: took 4.419338ms to wait for apiserver health ...
I0921 15:27:28.734328 10408 system_pods.go:43] waiting for kube-system pods to appear ...
I0921 15:27:28.920241 10408 system_pods.go:59] 7 kube-system pods found
I0921 15:27:28.920257 10408 system_pods.go:61] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:28.920261 10408 system_pods.go:61] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running
I0921 15:27:28.920274 10408 system_pods.go:61] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running
I0921 15:27:28.920279 10408 system_pods.go:61] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running
I0921 15:27:28.920283 10408 system_pods.go:61] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:28.920286 10408 system_pods.go:61] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running
I0921 15:27:28.920289 10408 system_pods.go:61] "storage-provisioner" [f71f00f0-f421-45c2-bfe4-c1e99f11b8e5] Running
I0921 15:27:28.920294 10408 system_pods.go:74] duration metric: took 185.961163ms to wait for pod list to return data ...
I0921 15:27:28.920300 10408 default_sa.go:34] waiting for default service account to be created ...
I0921 15:27:29.119704 10408 default_sa.go:45] found service account: "default"
I0921 15:27:29.119720 10408 default_sa.go:55] duration metric: took 199.41576ms for default service account to be created ...
I0921 15:27:29.119727 10408 system_pods.go:116] waiting for k8s-apps to be running ...
I0921 15:27:29.322362 10408 system_pods.go:86] 7 kube-system pods found
I0921 15:27:29.322375 10408 system_pods.go:89] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:29.322379 10408 system_pods.go:89] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running
I0921 15:27:29.322383 10408 system_pods.go:89] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running
I0921 15:27:29.322388 10408 system_pods.go:89] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running
I0921 15:27:29.322391 10408 system_pods.go:89] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:29.322395 10408 system_pods.go:89] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running
I0921 15:27:29.322398 10408 system_pods.go:89] "storage-provisioner" [f71f00f0-f421-45c2-bfe4-c1e99f11b8e5] Running
I0921 15:27:29.322402 10408 system_pods.go:126] duration metric: took 202.671392ms to wait for k8s-apps to be running ...
I0921 15:27:29.322407 10408 system_svc.go:44] waiting for kubelet service to be running ....
I0921 15:27:29.322452 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:27:29.331792 10408 system_svc.go:56] duration metric: took 9.381149ms WaitForService to wait for kubelet.
I0921 15:27:29.331804 10408 kubeadm.go:573] duration metric: took 3.387565971s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0921 15:27:29.331823 10408 node_conditions.go:102] verifying NodePressure condition ...
I0921 15:27:29.518084 10408 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0921 15:27:29.518100 10408 node_conditions.go:123] node cpu capacity is 2
I0921 15:27:29.518105 10408 node_conditions.go:105] duration metric: took 186.278888ms to run NodePressure ...
I0921 15:27:29.518113 10408 start.go:216] waiting for startup goroutines ...
I0921 15:27:29.551427 10408 start.go:506] kubectl: 1.25.0, cluster: 1.25.2 (minor skew: 0)
I0921 15:27:29.611327 10408 out.go:177] * Done! kubectl is now configured to use "pause-20220921152522-3535" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Wed 2022-09-21 22:25:29 UTC, ends at Wed 2022-09-21 22:27:30 UTC. --
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.405457988Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64651e97bf148aa1e9fbcad6bfbec4d1e8535ad920f0d5c47cd57190f6804445 pid=5990 runtime=io.containerd.runc.v2
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406210133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406245445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406253448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406435610Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/207eee071672f5cc181475db6e621afacd6722bc026b03a3b344ad50e1cefc78 pid=5992 runtime=io.containerd.runc.v2
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.422862395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.422958571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.422967730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.423253250Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/534b0d7cd88d7c2d979cc7e5c6eb29977494de71ff82fec3d02420ecb80a30b9 pid=6024 runtime=io.containerd.runc.v2
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785293775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785363542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785372748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785536470Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1650473a18ef5642e63da9873326d2ed8d331ce75d182aaf5834afe35d8f1c48 pid=6217 runtime=io.containerd.runc.v2
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.098886881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.098975354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.098986289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.099142849Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/152338a53f1e4e1033c391833e8d6cba34a8c41caa549b9524e155354c7edd68 pid=6265 runtime=io.containerd.runc.v2
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192601808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192670528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192679056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192948353Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c41fc7d463dbce833eb22fe2cbe7272c863767af9f5ce4eb37b36c8efa33b012 pid=6532 runtime=io.containerd.runc.v2
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493268572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493331709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493341289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493781950Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e6a3aeef0ff7cec28ea93bae81a53252f4adbfe81f9da2e64add46df53fa77f2 pid=6573 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e6a3aeef0ff7c 6e38f40d628db 3 seconds ago Running storage-provisioner 0 c41fc7d463dbc
152338a53f1e4 1c7d8c51823b5 14 seconds ago Running kube-proxy 3 f67bd5c5d43e1
1650473a18ef5 5185b96f0becf 15 seconds ago Running coredns 2 92cc25df1c118
64651e97bf148 a8a176a5d5d69 23 seconds ago Running etcd 3 0249ca0da9611
207eee071672f ca0ea1ee3cfd3 23 seconds ago Running kube-scheduler 3 522a493620409
534b0d7cd88d7 dbfceb93c69b6 23 seconds ago Running kube-controller-manager 3 f60c5ce6318fc
b6d4531497f33 97801f8394908 28 seconds ago Running kube-apiserver 3 0ca250926532e
d7cbc4c453b05 ca0ea1ee3cfd3 39 seconds ago Exited kube-scheduler 2 1a3e01fca5715
823942ffecb6f dbfceb93c69b6 42 seconds ago Exited kube-controller-manager 2 e1129956136e0
283fac289f860 a8a176a5d5d69 43 seconds ago Exited etcd 2 eb1318ed7bcc9
c2e8fe8419a96 1c7d8c51823b5 44 seconds ago Exited kube-proxy 2 994dd806c8bfd
4934b6e15931f 5185b96f0becf About a minute ago Exited coredns 1 163c82f50ebf1
3a4741e1fe3c0 97801f8394908 About a minute ago Exited kube-apiserver 2 3d0143698c2dc
*
* ==> coredns [1650473a18ef] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [4934b6e15931] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
*
* ==> describe nodes <==
* Name: pause-20220921152522-3535
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220921152522-3535
kubernetes.io/os=linux
minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
minikube.k8s.io/name=pause-20220921152522-3535
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_09_21T15_25_59_0700
minikube.k8s.io/version=v1.27.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 21 Sep 2022 22:25:58 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220921152522-3535
AcquireTime: <unset>
RenewTime: Wed, 21 Sep 2022 22:27:24 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:25:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:25:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:25:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:26:09 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.28
Hostname: pause-20220921152522-3535
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 0962272db386446fb19d5815e48c70e2
System UUID: 485511ed-0000-0000-82c9-149d997fca88
Boot ID: e52786ed-2040-47a8-9190-c9c808b4a98b
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.18
Kubelet Version: v1.25.2
Kube-Proxy Version: v1.25.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
  kube-system                 coredns-565d847f94-9wtnp                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
  kube-system                 etcd-pause-20220921152522-3535                        100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         92s
  kube-system                 kube-apiserver-pause-20220921152522-3535              250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
  kube-system                 kube-controller-manager-pause-20220921152522-3535     200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
  kube-system                 kube-proxy-5c7jc                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
  kube-system                 kube-scheduler-pause-20220921152522-3535              100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
  cpu                750m (37%)   0 (0%)
  memory             170Mi (8%)   170Mi (8%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 78s kube-proxy
Normal Starting 14s kube-proxy
Normal Starting 63s kube-proxy
Normal NodeHasSufficientPID 106s (x5 over 106s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 106s (x6 over 106s) kubelet Node pause-20220921152522-3535 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 106s (x6 over 106s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientMemory
Normal Starting 92s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 92s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 92s kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 92s kubelet Node pause-20220921152522-3535 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 92s kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientPID
Normal NodeReady 82s kubelet Node pause-20220921152522-3535 status is now: NodeReady
Normal RegisteredNode 80s node-controller Node pause-20220921152522-3535 event: Registered Node pause-20220921152522-3535 in Controller
Normal Starting 25s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 25s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 24s (x8 over 25s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24s (x8 over 25s) kubelet Node pause-20220921152522-3535 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24s (x7 over 25s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientPID
Normal RegisteredNode 4s node-controller Node pause-20220921152522-3535 event: Registered Node pause-20220921152522-3535 in Controller
*
* ==> dmesg <==
* [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.836758] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.731337] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[ +0.090984] systemd-fstab-generator[541]: Ignoring "noauto" for root device
[ +5.027202] systemd-fstab-generator[762]: Ignoring "noauto" for root device
[ +1.197234] kauditd_printk_skb: 16 callbacks suppressed
[ +0.214769] systemd-fstab-generator[921]: Ignoring "noauto" for root device
[ +0.091300] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.097321] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +1.296604] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
[ +0.087737] systemd-fstab-generator[1104]: Ignoring "noauto" for root device
[ +3.910315] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
[ +0.546371] kauditd_printk_skb: 68 callbacks suppressed
[ +13.692006] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[Sep21 22:26] kauditd_printk_skb: 8 callbacks suppressed
[ +8.344097] systemd-fstab-generator[2768]: Ignoring "noauto" for root device
[ +0.136976] systemd-fstab-generator[2779]: Ignoring "noauto" for root device
[ +0.134278] systemd-fstab-generator[2790]: Ignoring "noauto" for root device
[ +0.497533] kauditd_printk_skb: 17 callbacks suppressed
[ +7.690771] systemd-fstab-generator[4167]: Ignoring "noauto" for root device
[ +0.127432] systemd-fstab-generator[4182]: Ignoring "noauto" for root device
[ +31.144308] kauditd_printk_skb: 34 callbacks suppressed
[Sep21 22:27] systemd-fstab-generator[5830]: Ignoring "noauto" for root device
*
* ==> etcd [283fac289f86] <==
* {"level":"info","ts":"2022-09-21T22:26:47.976Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-09-21T22:26:47.976Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:26:47.976Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:26:49.366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 is starting a new election at term 3"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became pre-candidate at term 3"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgPreVoteResp from d3378a43e4252963 at term 3"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became candidate at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgVoteResp from d3378a43e4252963 at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became leader at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3378a43e4252963 elected leader d3378a43e4252963 at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3378a43e4252963","local-member-attributes":"{Name:pause-20220921152522-3535 ClientURLs:[https://192.168.64.28:2379]}","request-path":"/0/members/d3378a43e4252963/attributes","cluster-id":"e703c3abd1a7846","publish-timeout":"7s"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:26:49.368Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-09-21T22:26:49.370Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.28:2379"}
{"level":"info","ts":"2022-09-21T22:26:49.375Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-09-21T22:26:49.376Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-09-21T22:27:00.388Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-09-21T22:27:00.388Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220921152522-3535","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.28:2380"],"advertise-client-urls":["https://192.168.64.28:2379"]}
WARNING: 2022/09/21 22:27:00 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/09/21 22:27:00 [core] grpc: addrConn.createTransport failed to connect to {192.168.64.28:2379 192.168.64.28:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.64.28:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-09-21T22:27:00.391Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d3378a43e4252963","current-leader-member-id":"d3378a43e4252963"}
{"level":"info","ts":"2022-09-21T22:27:00.392Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:00.394Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:00.394Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220921152522-3535","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.28:2380"],"advertise-client-urls":["https://192.168.64.28:2379"]}
*
* ==> etcd [64651e97bf14] <==
* {"level":"info","ts":"2022-09-21T22:27:08.280Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d3378a43e4252963","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-09-21T22:27:08.282Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3378a43e4252963","initial-advertise-peer-urls":["https://192.168.64.28:2380"],"listen-peer-urls":["https://192.168.64.28:2380"],"advertise-client-urls":["https://192.168.64.28:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.28:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-09-21T22:27:08.282Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 switched to configuration voters=(15219785489916963171)"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e703c3abd1a7846","local-member-id":"d3378a43e4252963","added-peer-id":"d3378a43e4252963","added-peer-peer-urls":["https://192.168.64.28:2380"]}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e703c3abd1a7846","local-member-id":"d3378a43e4252963","cluster-version":"3.5"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:08.285Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 is starting a new election at term 4"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became pre-candidate at term 4"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgPreVoteResp from d3378a43e4252963 at term 4"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became candidate at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgVoteResp from d3378a43e4252963 at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became leader at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3378a43e4252963 elected leader d3378a43e4252963 at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3378a43e4252963","local-member-attributes":"{Name:pause-20220921152522-3535 ClientURLs:[https://192.168.64.28:2379]}","request-path":"/0/members/d3378a43e4252963/attributes","cluster-id":"e703c3abd1a7846","publish-timeout":"7s"}
{"level":"info","ts":"2022-09-21T22:27:09.548Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:27:09.548Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.28:2379"}
{"level":"info","ts":"2022-09-21T22:27:09.549Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:27:09.549Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-09-21T22:27:09.550Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-09-21T22:27:09.550Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 22:27:31 up 2 min, 0 users, load average: 0.39, 0.20, 0.08
Linux pause-20220921152522-3535 5.10.57 #1 SMP Sat Sep 10 02:24:46 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [3a4741e1fe3c] <==
* W0921 22:26:42.249889 1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0921 22:26:42.252491 1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0921 22:26:47.844900 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0921 22:26:51.410448 1 run.go:74] "command failed" err="context deadline exceeded"
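This older apiserver instance exits with "context deadline exceeded" because its blocking dial to etcd never succeeded while etcd was restarting; grpc kept retrying in the background (the "Reconnecting..." warnings) until the deadline fired. A stripped-down sketch of the same failure mode with grpc-go, using insecure credentials purely for brevity (the real apiserver dials etcd over TLS):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	// WithBlock makes DialContext wait for a ready connection; while the
	// target refuses connections, grpc retries internally and the call
	// only returns once the context deadline expires.
	conn, err := grpc.DialContext(ctx, "127.0.0.1:2379",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	if err != nil {
		log.Fatalf("command failed: %v", err) // e.g. context deadline exceeded
	}
	defer conn.Close()
}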
*
* ==> kube-apiserver [b6d4531497f3] <==
* I0921 22:27:14.062878 1 controller.go:85] Starting OpenAPI controller
I0921 22:27:14.063014 1 controller.go:85] Starting OpenAPI V3 controller
I0921 22:27:14.063120 1 naming_controller.go:291] Starting NamingConditionController
I0921 22:27:14.063157 1 establishing_controller.go:76] Starting EstablishingController
I0921 22:27:14.063169 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0921 22:27:14.063271 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0921 22:27:14.063303 1 crd_finalizer.go:266] Starting CRDFinalizer
I0921 22:27:14.071305 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0921 22:27:14.072396 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0921 22:27:14.156918 1 cache.go:39] Caches are synced for autoregister controller
I0921 22:27:14.157381 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0921 22:27:14.159134 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0921 22:27:14.160295 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0921 22:27:14.162748 1 apf_controller.go:305] Running API Priority and Fairness config worker
I0921 22:27:14.164291 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0921 22:27:14.214291 1 shared_informer.go:262] Caches are synced for node_authorizer
I0921 22:27:14.252859 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0921 22:27:14.849364 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0921 22:27:15.061773 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0921 22:27:15.487959 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0921 22:27:15.496083 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0921 22:27:15.512729 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0921 22:27:15.525104 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0921 22:27:15.528873 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0921 22:27:26.810346 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [534b0d7cd88d] <==
* I0921 22:27:27.091965 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0921 22:27:27.092105 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-20220921152522-3535. Assuming now as a timestamp.
I0921 22:27:27.092144 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0921 22:27:27.092272 1 event.go:294] "Event occurred" object="pause-20220921152522-3535" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220921152522-3535 event: Registered Node pause-20220921152522-3535 in Controller"
I0921 22:27:27.110604 1 shared_informer.go:262] Caches are synced for TTL
I0921 22:27:27.111981 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0921 22:27:27.112202 1 shared_informer.go:262] Caches are synced for HPA
I0921 22:27:27.112592 1 shared_informer.go:262] Caches are synced for TTL after finished
I0921 22:27:27.115223 1 shared_informer.go:262] Caches are synced for namespace
I0921 22:27:27.118788 1 shared_informer.go:262] Caches are synced for job
I0921 22:27:27.122949 1 shared_informer.go:262] Caches are synced for cronjob
I0921 22:27:27.126944 1 shared_informer.go:262] Caches are synced for endpoint
I0921 22:27:27.160485 1 shared_informer.go:262] Caches are synced for expand
I0921 22:27:27.173668 1 shared_informer.go:262] Caches are synced for persistent volume
I0921 22:27:27.175944 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0921 22:27:27.203878 1 shared_informer.go:262] Caches are synced for attach detach
I0921 22:27:27.211345 1 shared_informer.go:262] Caches are synced for PV protection
I0921 22:27:27.216091 1 shared_informer.go:262] Caches are synced for resource quota
I0921 22:27:27.220621 1 shared_informer.go:262] Caches are synced for stateful set
I0921 22:27:27.261055 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0921 22:27:27.269364 1 shared_informer.go:262] Caches are synced for resource quota
I0921 22:27:27.311010 1 shared_informer.go:262] Caches are synced for daemon sets
I0921 22:27:27.654916 1 shared_informer.go:262] Caches are synced for garbage collector
I0921 22:27:27.686746 1 shared_informer.go:262] Caches are synced for garbage collector
I0921 22:27:27.686841 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [823942ffecb6] <==
* I0921 22:26:49.430074 1 serving.go:348] Generated self-signed cert in-memory
I0921 22:26:50.068771 1 controllermanager.go:178] Version: v1.25.2
I0921 22:26:50.068811 1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 22:26:50.069610 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0921 22:26:50.069706 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0921 22:26:50.069775 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0921 22:26:50.070146 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
*
* ==> kube-proxy [152338a53f1e] <==
* I0921 22:27:16.200105 1 node.go:163] Successfully retrieved node IP: 192.168.64.28
I0921 22:27:16.200255 1 server_others.go:138] "Detected node IP" address="192.168.64.28"
I0921 22:27:16.200284 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0921 22:27:16.220796 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0921 22:27:16.220810 1 server_others.go:206] "Using iptables Proxier"
I0921 22:27:16.220829 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0921 22:27:16.221038 1 server.go:661] "Version info" version="v1.25.2"
I0921 22:27:16.221047 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 22:27:16.221421 1 config.go:317] "Starting service config controller"
I0921 22:27:16.221427 1 shared_informer.go:255] Waiting for caches to sync for service config
I0921 22:27:16.221438 1 config.go:226] "Starting endpoint slice config controller"
I0921 22:27:16.221440 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0921 22:27:16.221790 1 config.go:444] "Starting node config controller"
I0921 22:27:16.221831 1 shared_informer.go:255] Waiting for caches to sync for node config
I0921 22:27:16.321553 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0921 22:27:16.321868 1 shared_informer.go:262] Caches are synced for service config
I0921 22:27:16.322427 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [c2e8fe8419a9] <==
* E0921 22:26:52.417919 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.64.28:45762->192.168.64.28:8443: read: connection reset by peer
E0921 22:26:53.525473 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:55.541635 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.072196 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused
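This pre-restart kube-proxy is retrying its node lookup against an apiserver that is still down. The equivalent retry loop with client-go, assuming a hypothetical kubeconfig path and the node name from the log, looks roughly like:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kube-proxy/kubeconfig.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll until the apiserver answers; each refused dial surfaces as an
	// error like the node.go:152 lines above.
	err = wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"pause-20220921152522-3535", metav1.GetOptions{})
		if err != nil {
			log.Println("retrying:", err)
			return false, nil
		}
		log.Println("node addresses:", node.Status.Addresses)
		return true, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}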
*
* ==> kube-scheduler [207eee071672] <==
* I0921 22:27:07.942128 1 serving.go:348] Generated self-signed cert in-memory
W0921 22:27:14.136528 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0921 22:27:14.136587 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0921 22:27:14.136596 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0921 22:27:14.136622 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0921 22:27:14.160522 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.2"
I0921 22:27:14.160612 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 22:27:14.161435 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0921 22:27:14.161580 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0921 22:27:14.163051 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0921 22:27:14.161599 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0921 22:27:14.263724 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [d7cbc4c453b0] <==
* W0921 22:26:56.662066 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.64.28:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.662326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.28:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:56.676873 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.677417 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:56.727262 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.64.28:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.727389 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.28:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:56.792874 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.28:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.792933 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.28:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:57.019135 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:57.019287 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:57.111170 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.64.28:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:57.111256 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.64.28:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:59.563534 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.64.28:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:59.563559 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.28:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:59.965353 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:59.965379 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:27:00.044825 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.64.28:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.044871 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.28:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:27:00.384285 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.384326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.398528 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0921 22:27:00.398546 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0921 22:27:00.398572 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0921 22:27:00.398622 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0921 22:27:00.398861 1 run.go:74] "command failed" err="finished without leader elect"
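"finished without leader elect" follows directly from the reflector failures above: each informer's reflector must complete an initial list before its cache can sync, and every list was failing with connection refused, so the scheduler logged "unable to sync caches" and shut down. A compact client-go sketch of that list-then-sync dependency, assuming the standard kubeadm scheduler kubeconfig path:

package main

import (
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standard kubeadm path; an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	factory.Start(stop)

	// The reflector behind podInformer lists then watches pods, retrying
	// with backoff on failure (the reflector.go warnings above); HasSynced
	// stays false, and this wait blocks, until one list succeeds.
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		log.Fatal("unable to sync caches")
	}
	log.Println("caches are synced")
}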
*
* ==> kubelet <==
* -- Journal begins at Wed 2022-09-21 22:25:29 UTC, ends at Wed 2022-09-21 22:27:32 UTC. --
Sep 21 22:27:13 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:13.739144 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:13 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:13.839713 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:13 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:13.940319 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:14.040786 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.141509 5836 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.142001 5836 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.235105 5836 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220921152522-3535"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.235257 5836 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220921152522-3535"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.845723 5836 apiserver.go:52] "Watching apiserver"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.847588 5836 topology_manager.go:205] "Topology Admit Handler"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.847682 5836 topology_manager.go:205] "Topology Admit Handler"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951602 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-kube-proxy\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951731 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-lib-modules\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951776 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-xtables-lock\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951850 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb8f3bae-6107-4a2b-ba32-d79405830bf0-config-volume\") pod \"coredns-565d847f94-9wtnp\" (UID: \"eb8f3bae-6107-4a2b-ba32-d79405830bf0\") " pod="kube-system/coredns-565d847f94-9wtnp"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951882 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2kwd\" (UniqueName: \"kubernetes.io/projected/eb8f3bae-6107-4a2b-ba32-d79405830bf0-kube-api-access-p2kwd\") pod \"coredns-565d847f94-9wtnp\" (UID: \"eb8f3bae-6107-4a2b-ba32-d79405830bf0\") " pod="kube-system/coredns-565d847f94-9wtnp"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951915 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2rf\" (UniqueName: \"kubernetes.io/projected/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-kube-api-access-zh2rf\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951971 5836 reconciler.go:169] "Reconciler: start to sync state"
Sep 21 22:27:15 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:15.748097 5836 scope.go:115] "RemoveContainer" containerID="4934b6e15931f96c8cd7409c9d9d107463001d3dbbe402bc7ecacd045cfdf26e"
Sep 21 22:27:16 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:16.049291 5836 scope.go:115] "RemoveContainer" containerID="c2e8fe8419a96380dd14dec68931ed3399dbf26a6ff33aace75ae52a339d8568"
Sep 21 22:27:23 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:23.685529 5836 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Sep 21 22:27:26 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:26.821517 5836 topology_manager.go:205] "Topology Admit Handler"
Sep 21 22:27:26 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:26.979546 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f71f00f0-f421-45c2-bfe4-c1e99f11b8e5-tmp\") pod \"storage-provisioner\" (UID: \"f71f00f0-f421-45c2-bfe4-c1e99f11b8e5\") " pod="kube-system/storage-provisioner"
Sep 21 22:27:26 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:26.979717 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2k8\" (UniqueName: \"kubernetes.io/projected/f71f00f0-f421-45c2-bfe4-c1e99f11b8e5-kube-api-access-tv2k8\") pod \"storage-provisioner\" (UID: \"f71f00f0-f421-45c2-bfe4-c1e99f11b8e5\") " pod="kube-system/storage-provisioner"
Sep 21 22:27:27 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:27.456744 5836 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c41fc7d463dbce833eb22fe2cbe7272c863767af9f5ce4eb37b36c8efa33b012"
*
* ==> storage-provisioner [e6a3aeef0ff7] <==
* I0921 22:27:27.575776 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0921 22:27:27.585007 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0921 22:27:27.585247 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0921 22:27:27.589937 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0921 22:27:27.590215 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220921152522-3535_c99c674d-e74f-4876-b9bc-cca2318207c1!
I0921 22:27:27.591354 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cea77369-71af-4aec-8a4d-59cc48396b09", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220921152522-3535_c99c674d-e74f-4876-b9bc-cca2318207c1 became leader
I0921 22:27:27.690985 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220921152522-3535_c99c674d-e74f-4876-b9bc-cca2318207c1!
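The leaderelection.go lines above come from client-go's standard leader-election helper; per the Event, this provisioner locks on an Endpoints object named k8s.io-minikube-hostpath. A sketch of the same acquire-then-run pattern using the newer Lease-based lock, with a hypothetical kubeconfig path and identity:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Hypothetical kubeconfig path and identity for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "my-provisioner-id"},
	}
	// Blocks acquiring the lease, runs OnStartedLeading while held, and
	// renews in the background -- the acquire/start sequence logged above.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("successfully acquired lease; starting controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease; stopping") },
		},
	})
}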
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220921152522-3535 -n pause-20220921152522-3535
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220921152522-3535 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220921152522-3535 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220921152522-3535 describe pod : exit status 1 (36.087493ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220921152522-3535 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220921152522-3535 -n pause-20220921152522-3535
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-20220921152522-3535 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220921152522-3535 logs -n 25: (2.728061495s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
| start | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:20 PDT | 21 Sep 22 15:21 PDT |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:21 PDT | |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:21 PDT | 21 Sep 22 15:21 PDT |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | kubernetes-upgrade-20220921151918-3535 | jenkins | v1.27.0 | 21 Sep 22 15:21 PDT | 21 Sep 22 15:21 PDT |
| | kubernetes-upgrade-20220921151918-3535 | | | | | |
| start | -p | cert-expiration-20220921151821-3535 | jenkins | v1.27.0 | 21 Sep 22 15:22 PDT | 21 Sep 22 15:22 PDT |
| | cert-expiration-20220921151821-3535 | | | | | |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | cert-expiration-20220921151821-3535 | jenkins | v1.27.0 | 21 Sep 22 15:22 PDT | 21 Sep 22 15:22 PDT |
| | cert-expiration-20220921151821-3535 | | | | | |
| start | -p | stopped-upgrade-20220921152137-3535 | jenkins | v1.27.0 | 21 Sep 22 15:23 PDT | 21 Sep 22 15:24 PDT |
| | stopped-upgrade-20220921152137-3535 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=hyperkit | | | | | |
| start | -p | running-upgrade-20220921152233-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | 21 Sep 22 15:25 PDT |
| | running-upgrade-20220921152233-3535 | | | | | |
| | --memory=2200 --alsologtostderr | | | | | |
| | -v=1 --driver=hyperkit | | | | | |
| delete | -p | stopped-upgrade-20220921152137-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | 21 Sep 22 15:24 PDT |
| | stopped-upgrade-20220921152137-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:24 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | running-upgrade-20220921152233-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | running-upgrade-20220921152233-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --no-kubernetes | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-20220921152522-3535 | pause-20220921152522-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:26 PDT |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| delete | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --no-kubernetes | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| profile | list | minikube | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| profile | list --output=json | minikube | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| stop | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:25 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| start | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:25 PDT | 21 Sep 22 15:26 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | |
| | NoKubernetes-20220921152435-3535 | | | | | |
| | sudo systemctl is-active --quiet | | | | | |
| | service kubelet | | | | | |
| delete | -p | NoKubernetes-20220921152435-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | 21 Sep 22 15:26 PDT |
| | NoKubernetes-20220921152435-3535 | | | | | |
| start | -p false-20220921151637-3535 | false-20220921151637-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | |
| | --memory=2048 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --wait-timeout=5m --cni=false | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-20220921152522-3535 | pause-20220921152522-3535 | jenkins | v1.27.0 | 21 Sep 22 15:26 PDT | 21 Sep 22 15:27 PDT |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
|---------|----------------------------------------|----------------------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/09/21 15:26:16
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.19.1 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0921 15:26:16.412297 10408 out.go:296] Setting OutFile to fd 1 ...
I0921 15:26:16.412857 10408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 15:26:16.412883 10408 out.go:309] Setting ErrFile to fd 2...
I0921 15:26:16.412925 10408 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0921 15:26:16.413172 10408 root.go:333] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin
I0921 15:26:16.413935 10408 out.go:303] Setting JSON to false
I0921 15:26:16.429337 10408 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5147,"bootTime":1663794029,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.6","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0921 15:26:16.429439 10408 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0921 15:26:16.451061 10408 out.go:177] * [pause-20220921152522-3535] minikube v1.27.0 on Darwin 12.6
I0921 15:26:16.492895 10408 notify.go:214] Checking for updates...
I0921 15:26:16.513942 10408 out.go:177] - MINIKUBE_LOCATION=14995
I0921 15:26:16.535147 10408 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:26:16.555899 10408 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0921 15:26:16.577004 10408 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0921 15:26:16.598036 10408 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube
I0921 15:26:16.619232 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:16.619572 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.619620 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:16.626042 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52950
I0921 15:26:16.626541 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:16.626992 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:16.627004 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:16.627211 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:16.627372 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:16.627501 10408 driver.go:365] Setting default libvirt URI to qemu:///system
I0921 15:26:16.627783 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.627806 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:16.634000 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52952
I0921 15:26:16.634367 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:16.634679 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:16.634691 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:16.634960 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:16.635067 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:16.661930 10408 out.go:177] * Using the hyperkit driver based on existing profile
I0921 15:26:16.703890 10408 start.go:284] selected driver: hyperkit
I0921 15:26:16.703910 10408 start.go:808] validating driver "hyperkit" against &{Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:16.704025 10408 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0921 15:26:16.704092 10408 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 15:26:16.704203 10408 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0921 15:26:16.710571 10408 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.27.0
I0921 15:26:16.713621 10408 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:16.713649 10408 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0921 15:26:16.715630 10408 cni.go:95] Creating CNI manager for ""
I0921 15:26:16.715647 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:26:16.715664 10408 start_flags.go:316] config:
{Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:16.715818 10408 iso.go:124] acquiring lock: {Name:mke8c57399926d29e846b47dd4be4625ba5fcaea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0921 15:26:16.774023 10408 out.go:177] * Starting control plane node pause-20220921152522-3535 in cluster pause-20220921152522-3535
I0921 15:26:14.112290 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | 2022/09/21 15:26:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
I0921 15:26:14.112374 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | 2022/09/21 15:26:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
I0921 15:26:14.112386 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | 2022/09/21 15:26:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
I0921 15:26:15.320346 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Attempt 3
I0921 15:26:15.320365 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:15.320474 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:15.321107 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Searching for 36:15:df:cc:5b:5b in /var/db/dhcpd_leases ...
I0921 15:26:15.321174 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Found 28 entries in /var/db/dhcpd_leases!
I0921 15:26:15.321185 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.29 HWAddress:3e:7a:92:24:5:ce ID:1,3e:7a:92:24:5:ce Lease:0x632b8f7f}
I0921 15:26:15.321194 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.28 HWAddress:c2:90:21:6e:75:6 ID:1,c2:90:21:6e:75:6 Lease:0x632ce0da}
I0921 15:26:15.321202 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:9e:f3:b1:1c:9b:1c ID:1,9e:f3:b1:1c:9b:1c Lease:0x632b8f54}
I0921 15:26:15.321211 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:66:c5:83:6d:55:91 ID:1,66:c5:83:6d:55:91 Lease:0x632ce03b}
I0921 15:26:15.321220 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:ea:9c:f4:77:1d:3d ID:1,ea:9c:f4:77:1d:3d Lease:0x632ce076}
I0921 15:26:15.321227 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:36:e:45:14:25:55 ID:1,36:e:45:14:25:55 Lease:0x632cdfb6}
I0921 15:26:15.321236 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:92:2e:30:54:49:f3 ID:1,92:2e:30:54:49:f3 Lease:0x632b8de5}
I0921 15:26:15.321243 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:83:83:3:65:1a ID:1,1a:83:83:3:65:1a Lease:0x632cdf36}
I0921 15:26:15.321252 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:b6:1a:2d:8:65:c5 ID:1,b6:1a:2d:8:65:c5 Lease:0x632cdf16}
I0921 15:26:15.321259 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:72:4c:c8:cf:4f:63 ID:1,72:4c:c8:cf:4f:63 Lease:0x632b8dac}
I0921 15:26:15.321274 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:c2:f8:ac:87:d9:f0 ID:1,c2:f8:ac:87:d9:f0 Lease:0x632b8d80}
I0921 15:26:15.321291 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:62:35:c1:26:64:c0 ID:1,62:35:c1:26:64:c0 Lease:0x632b8d81}
I0921 15:26:15.321303 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:96:24:b5:8e:13:fc ID:1,96:24:b5:8e:13:fc Lease:0x632cde86}
I0921 15:26:15.321315 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:e:f1:67:89:3f:e3 ID:1,e:f1:67:89:3f:e3 Lease:0x632cde14}
I0921 15:26:15.321324 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:a2:3d:49:78:3b:4c ID:1,a2:3d:49:78:3b:4c Lease:0x632cdd68}
I0921 15:26:15.321339 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:1a:dd:bc:c:73:c4 ID:1,1a:dd:bc:c:73:c4 Lease:0x632cdd35}
I0921 15:26:15.321350 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:52:e5:24:3b:ab:4 ID:1,52:e5:24:3b:ab:4 Lease:0x632b897b}
I0921 15:26:15.321358 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:b4:fe:f4:b1:24 ID:1,be:b4:fe:f4:b1:24 Lease:0x632b8bde}
I0921 15:26:15.321365 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:8a:c8:9b:80:80:10 ID:1,8a:c8:9b:80:80:10 Lease:0x632b8bdc}
I0921 15:26:15.321376 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:12:72:ad:9f:f1:8f ID:1,12:72:ad:9f:f1:8f Lease:0x632b8511}
I0921 15:26:15.321387 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:4a:58:20:58:21:84 ID:1,4a:58:20:58:21:84 Lease:0x632b84fc}
I0921 15:26:15.321395 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:4e:eb:64:20:d8:40 ID:1,4e:eb:64:20:d8:40 Lease:0x632b84d4}
I0921 15:26:15.321404 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:96:cb:c8:56:48:73 ID:1,96:cb:c8:56:48:73 Lease:0x632cd609}
I0921 15:26:15.321411 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:60:ad:7c:55:a0 ID:1,3e:60:ad:7c:55:a0 Lease:0x632cd5c9}
I0921 15:26:15.321418 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:2:7a:1a:6a:a6:1f ID:1,2:7a:1a:6a:a6:1f Lease:0x632b843f}
I0921 15:26:15.321426 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:9a:e7:f8:d0:27:5a ID:1,9a:e7:f8:d0:27:5a Lease:0x632cd449}
I0921 15:26:15.321434 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:12:80:14:fc:de:ba ID:1,12:80:14:fc:de:ba Lease:0x632b82be}
I0921 15:26:15.321440 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:56:cf:47:52:47:7e ID:1,56:cf:47:52:47:7e Lease:0x632b8281}
I0921 15:26:17.321647 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Attempt 4
I0921 15:26:17.321668 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:17.321761 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:17.322288 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Searching for 36:15:df:cc:5b:5b in /var/db/dhcpd_leases ...
I0921 15:26:17.322356 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Found 29 entries in /var/db/dhcpd_leases!
I0921 15:26:17.322367 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.30 HWAddress:36:15:df:cc:5b:5b ID:1,36:15:df:cc:5b:5b Lease:0x632ce108}
I0921 15:26:17.322380 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Found match: 36:15:df:cc:5b:5b
I0921 15:26:17.322390 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | IP: 192.168.64.30
I0921 15:26:17.322428 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetConfigRaw
I0921 15:26:17.322951 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:17.323049 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:17.323142 10389 main.go:134] libmachine: Waiting for machine to be running, this may take a few minutes...
I0921 15:26:17.323154 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:17.323221 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:17.323276 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:17.323815 10389 main.go:134] libmachine: Detecting operating system of created instance...
I0921 15:26:17.323821 10389 main.go:134] libmachine: Waiting for SSH to be available...
I0921 15:26:17.323832 10389 main.go:134] libmachine: Getting to WaitForSSH function...
I0921 15:26:17.323840 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:17.323909 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:17.323997 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:17.324070 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:17.324148 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:17.324242 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:17.324383 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:17.324389 10389 main.go:134] libmachine: About to run SSH command:
exit 0
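
[editor's note] WaitForSSH, as logged above, probes readiness by running the no-op command "exit 0" over SSH and retrying until it succeeds. A minimal sketch of such a probe with golang.org/x/crypto/ssh; the retry interval, timeout handling, and helper name are illustrative, not minikube's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH retries a no-op "exit 0" over SSH until the guest accepts it,
    // the same readiness probe shown in the log.
    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
            Timeout:         5 * time.Second,
        }
        var lastErr error
        for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(time.Second) {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err != nil {
                lastErr = err
                continue
            }
            sess, err := client.NewSession()
            if err == nil {
                err = sess.Run("exit 0") // succeeds once sshd is fully up
                sess.Close()
            }
            client.Close()
            if err == nil {
                return nil
            }
            lastErr = err
        }
        return fmt.Errorf("ssh not ready after %v: %w", timeout, lastErr)
    }
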
I0921 15:26:16.794876 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:16.794956 10408 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4
I0921 15:26:16.795012 10408 cache.go:57] Caching tarball of preloaded images
I0921 15:26:16.795122 10408 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0921 15:26:16.795144 10408 cache.go:60] Finished verifying existence of preloaded tar for v1.25.2 on docker
I0921 15:26:16.795239 10408 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/config.json ...
I0921 15:26:16.795594 10408 cache.go:208] Successfully downloaded all kic artifacts
I0921 15:26:16.795620 10408 start.go:364] acquiring machines lock for pause-20220921152522-3535: {Name:mk2f7774d81f069136708da9f7558413d7930511 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0921 15:26:19.803647 10408 start.go:368] acquired machines lock for "pause-20220921152522-3535" in 3.008011859s
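
[editor's note] The machines lock above serializes concurrent starts: the pause-* process waited ~3s to acquire it because the parallel false-* test (pid 10389, interleaved throughout this log) held it. A generic sketch of the acquire-with-retry pattern implied by the Delay:500ms/Timeout:13m0s fields, using an exclusive lock file — minikube's actual lock mechanism may differ:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file, retrying every delay
    // until timeout, mirroring the Delay/Timeout fields shown in the log.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay)
        }
    }
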
I0921 15:26:19.803693 10408 start.go:96] Skipping create...Using existing machine configuration
I0921 15:26:19.803704 10408 fix.go:55] fixHost starting:
I0921 15:26:19.804014 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:19.804040 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:19.810489 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52975
I0921 15:26:19.810845 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:19.811156 10408 main.go:134] libmachine: Using API Version 1
I0921 15:26:19.811167 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:19.811357 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:19.811458 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:19.811557 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:26:19.811664 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:19.811739 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:26:19.812542 10408 fix.go:103] recreateIfNeeded on pause-20220921152522-3535: state=Running err=<nil>
W0921 15:26:19.812564 10408 fix.go:129] unexpected machine state, will restart: <nil>
I0921 15:26:19.835428 10408 out.go:177] * Updating the running hyperkit "pause-20220921152522-3535" VM ...
I0921 15:26:19.856170 10408 machine.go:88] provisioning docker machine ...
I0921 15:26:19.856192 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:19.856377 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.856478 10408 buildroot.go:166] provisioning hostname "pause-20220921152522-3535"
I0921 15:26:19.856489 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.856574 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.856646 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:19.856744 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.856835 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.856914 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:19.857028 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.857193 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:19.857203 10408 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-20220921152522-3535 && echo "pause-20220921152522-3535" | sudo tee /etc/hostname
I0921 15:26:19.929633 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220921152522-3535
I0921 15:26:19.929693 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.929883 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:19.930020 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.930143 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:19.930253 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:19.930438 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.930577 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:19.930595 10408 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20220921152522-3535' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220921152522-3535/g' /etc/hosts;
else
echo '127.0.1.1 pause-20220921152522-3535' | sudo tee -a /etc/hosts;
fi
fi
I0921 15:26:19.992780 10408 main.go:134] libmachine: SSH cmd err, output: <nil>:
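
[editor's note] The script just run keeps /etc/hosts consistent with the new hostname: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended, so repeated provisioning never duplicates entries. The same logic in Go (an illustrative helper, slightly simplified from the grep guard in the script):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point at
    // hostname, or appends one if none exists -- the same effect as the
    // sed/tee script in the log (minus its outer "already present" check).
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        replaced := false
        for i, line := range lines {
            if strings.HasPrefix(line, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }
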
I0921 15:26:19.992803 10408 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
I0921 15:26:19.992832 10408 buildroot.go:174] setting up certificates
I0921 15:26:19.992843 10408 provision.go:83] configureAuth start
I0921 15:26:19.992852 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetMachineName
I0921 15:26:19.993017 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetIP
I0921 15:26:19.993132 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:19.993213 10408 provision.go:138] copyHostCerts
I0921 15:26:19.993302 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
I0921 15:26:19.993310 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
I0921 15:26:19.993450 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
I0921 15:26:19.993643 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
I0921 15:26:19.993649 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
I0921 15:26:19.993780 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1679 bytes)
I0921 15:26:19.994087 10408 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
I0921 15:26:19.994094 10408 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
I0921 15:26:19.994203 10408 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
I0921 15:26:19.994341 10408 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.pause-20220921152522-3535 san=[192.168.64.28 192.168.64.28 localhost 127.0.0.1 minikube pause-20220921152522-3535]
I0921 15:26:20.145157 10408 provision.go:172] copyRemoteCerts
I0921 15:26:20.145229 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0921 15:26:20.145247 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.145395 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.145492 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.145591 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.145687 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.181860 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0921 15:26:20.204288 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0921 15:26:20.223046 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0921 15:26:20.242859 10408 provision.go:86] duration metric: configureAuth took 250.000259ms
I0921 15:26:20.242872 10408 buildroot.go:189] setting minikube options for container-runtime
I0921 15:26:20.243031 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:20.243050 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.243218 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.243320 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.243440 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.243555 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.243661 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.243798 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.243914 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.243922 10408 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0921 15:26:20.307004 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0921 15:26:20.307030 10408 buildroot.go:70] root file system type: tmpfs
I0921 15:26:20.307188 10408 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0921 15:26:20.307206 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.307379 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.307501 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.307587 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.307679 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.307823 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.307954 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.308011 10408 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0921 15:26:20.380017 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0921 15:26:20.380044 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.380193 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.380302 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.380410 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.380514 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.380665 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.380781 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.380797 10408 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0921 15:26:20.447616 10408 main.go:134] libmachine: SSH cmd err, output: <nil>:
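
[editor's note] The diff-or-replace one-liner above updates the systemd unit only when the generated file differs, so an already-correct unit leaves docker untouched (here diff is silent and nothing restarts; contrast the false-* VM later in the log, where the unit does not exist yet and is installed). A sketch of the same change-detection pattern in Go — the helper is illustrative, the systemctl sequence mirrors the log:

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // replaceIfChanged installs newPath over oldPath and restarts docker only
    // when the contents differ, matching the "diff || { mv; systemctl ... }"
    // shell pattern in the log.
    func replaceIfChanged(oldPath, newPath string) error {
        oldData, _ := os.ReadFile(oldPath) // missing file => nil, treated as changed
        newData, err := os.ReadFile(newPath)
        if err != nil {
            return err
        }
        if bytes.Equal(oldData, newData) {
            return os.Remove(newPath) // nothing to do
        }
        if err := os.Rename(newPath, oldPath); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"-f", "daemon-reload"},
            {"-f", "enable", "docker"},
            {"-f", "restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }
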
I0921 15:26:20.447629 10408 machine.go:91] provisioned docker machine in 591.445478ms
I0921 15:26:20.447641 10408 start.go:300] post-start starting for "pause-20220921152522-3535" (driver="hyperkit")
I0921 15:26:20.447646 10408 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0921 15:26:20.447659 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.447885 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0921 15:26:20.447901 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.448051 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.448156 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.448291 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.448405 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.484862 10408 ssh_runner.go:195] Run: cat /etc/os-release
I0921 15:26:20.487726 10408 info.go:137] Remote host: Buildroot 2021.02.12
I0921 15:26:20.487742 10408 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
I0921 15:26:20.487867 10408 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
I0921 15:26:20.488046 10408 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem -> 35352.pem in /etc/ssl/certs
I0921 15:26:20.488202 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0921 15:26:20.495074 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:20.515167 10408 start.go:303] post-start completed in 67.502258ms
I0921 15:26:20.515187 10408 fix.go:57] fixHost completed within 711.484594ms
I0921 15:26:20.515203 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.515368 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.515520 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.515638 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.515770 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.515941 10408 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:20.516053 10408 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.28 22 <nil> <nil>}
I0921 15:26:20.516063 10408 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0921 15:26:20.577712 10408 main.go:134] libmachine: SSH cmd err, output: <nil>: 1663799180.686854068
I0921 15:26:20.577735 10408 fix.go:207] guest clock: 1663799180.686854068
I0921 15:26:20.577746 10408 fix.go:220] Guest: 2022-09-21 15:26:20.686854068 -0700 PDT Remote: 2022-09-21 15:26:20.51519 -0700 PDT m=+4.146234536 (delta=171.664068ms)
I0921 15:26:20.577765 10408 fix.go:191] guest clock delta is within tolerance: 171.664068ms
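
[editor's note] The guest clock check above runs `date +%s.%N` inside the VM, parses the epoch output, and compares it against the host clock; the ~172ms delta is within tolerance, so no time sync is forced. A sketch of that comparison, assuming the "seconds.nanoseconds" output format shown in the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses `date +%s.%N` output (e.g. "1663799180.686854068")
    // and returns the guest clock's offset from the host clock.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        if len(parts) != 2 {
            return 0, fmt.Errorf("unexpected output %q", guestOut)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return 0, err
        }
        return time.Unix(sec, nsec).Sub(host), nil
    }
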
I0921 15:26:20.577770 10408 start.go:83] releasing machines lock for "pause-20220921152522-3535", held for 774.111447ms
I0921 15:26:20.577789 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.577928 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetIP
I0921 15:26:20.578042 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578174 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578318 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578705 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578809 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:26:20.578906 10408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0921 15:26:20.578961 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.578984 10408 ssh_runner.go:195] Run: systemctl --version
I0921 15:26:20.578999 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:26:20.579066 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.579106 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:26:20.579182 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.579228 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:26:20.579290 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.579338 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:26:20.579415 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.579448 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:26:20.650058 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:20.650150 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:20.668593 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:20.668610 10408 docker.go:542] Images already preloaded, skipping extraction
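
[editor's note] The skip decision above comes from comparing the `docker images` output against the expected preloaded image list: this run finds all of them present, while the false-* run elsewhere in the log gets an empty list ("kube-apiserver:v1.25.2 wasn't preloaded") and has to copy the tarball over. A minimal sketch of that decision:

    package main

    // needsPreload reports whether any expected image is missing from the
    // images currently visible to the runtime (per `docker images` output).
    func needsPreload(got, want []string) bool {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        for _, img := range want {
            if !have[img] {
                return true
            }
        }
        return false
    }
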
I0921 15:26:20.668676 10408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0921 15:26:20.679656 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0921 15:26:20.692651 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:20.702013 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0921 15:26:20.715942 10408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0921 15:26:20.844184 10408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0921 15:26:20.974988 10408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:21.117162 10408 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:18.404949 10389 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:18.404961 10389 main.go:134] libmachine: Detecting the provisioner...
I0921 15:26:18.404967 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.405102 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.405195 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.405274 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.405369 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.405482 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:18.405601 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:18.405610 10389 main.go:134] libmachine: About to run SSH command:
cat /etc/os-release
I0921 15:26:18.483176 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2021.02.12-1-g1be7c81-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"
I0921 15:26:18.483226 10389 main.go:134] libmachine: found compatible host: buildroot
I0921 15:26:18.483233 10389 main.go:134] libmachine: Provisioning with buildroot...
I0921 15:26:18.483245 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetMachineName
I0921 15:26:18.483380 10389 buildroot.go:166] provisioning hostname "false-20220921151637-3535"
I0921 15:26:18.483392 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetMachineName
I0921 15:26:18.483485 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.483579 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.483675 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.483757 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.483857 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.483983 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:18.484098 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:18.484107 10389 main.go:134] libmachine: About to run SSH command:
sudo hostname false-20220921151637-3535 && echo "false-20220921151637-3535" | sudo tee /etc/hostname
I0921 15:26:18.570488 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: false-20220921151637-3535
I0921 15:26:18.570510 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.570653 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.570761 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.570862 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.570935 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.571055 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:18.571174 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:18.571186 10389 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\sfalse-20220921151637-3535' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20220921151637-3535/g' /etc/hosts;
else
echo '127.0.1.1 false-20220921151637-3535' | sudo tee -a /etc/hosts;
fi
fi
I0921 15:26:18.653580 10389 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0921 15:26:18.653600 10389 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube}
I0921 15:26:18.653620 10389 buildroot.go:174] setting up certificates
I0921 15:26:18.653630 10389 provision.go:83] configureAuth start
I0921 15:26:18.653637 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetMachineName
I0921 15:26:18.653765 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetIP
I0921 15:26:18.653853 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.653932 10389 provision.go:138] copyHostCerts
I0921 15:26:18.654006 10389 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem, removing ...
I0921 15:26:18.654013 10389 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem
I0921 15:26:18.654127 10389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.pem (1078 bytes)
I0921 15:26:18.654316 10389 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem, removing ...
I0921 15:26:18.654322 10389 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem
I0921 15:26:18.654389 10389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cert.pem (1123 bytes)
I0921 15:26:18.654553 10389 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem, removing ...
I0921 15:26:18.654559 10389 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem
I0921 15:26:18.654614 10389 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/key.pem (1679 bytes)
I0921 15:26:18.654728 10389 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem org=jenkins.false-20220921151637-3535 san=[192.168.64.30 192.168.64.30 localhost 127.0.0.1 minikube false-20220921151637-3535]
I0921 15:26:18.931086 10389 provision.go:172] copyRemoteCerts
I0921 15:26:18.931145 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0921 15:26:18.931162 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:18.931342 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:18.931454 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:18.931547 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:18.931640 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:18.977451 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0921 15:26:18.993393 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
I0921 15:26:19.009261 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0921 15:26:19.024820 10389 provision.go:86] duration metric: configureAuth took 371.177848ms
I0921 15:26:19.024832 10389 buildroot.go:189] setting minikube options for container-runtime
I0921 15:26:19.024951 10389 config.go:180] Loaded profile config "false-20220921151637-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:19.024965 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.025081 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.025169 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.025260 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.025332 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.025427 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.025536 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.025635 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.025643 10389 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0921 15:26:19.103232 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0921 15:26:19.103245 10389 buildroot.go:70] root file system type: tmpfs
I0921 15:26:19.103367 10389 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0921 15:26:19.103382 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.103506 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.103596 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.103680 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.103774 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.103895 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.103995 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.104045 10389 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0921 15:26:19.189517 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0921 15:26:19.189540 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.189677 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.189768 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.189857 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.189943 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.190071 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.190182 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.190195 10389 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0921 15:26:19.657263 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0921 15:26:19.657285 10389 main.go:134] libmachine: Checking connection to Docker...
I0921 15:26:19.657293 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetURL
I0921 15:26:19.657424 10389 main.go:134] libmachine: Docker is up and running!
I0921 15:26:19.657433 10389 main.go:134] libmachine: Reticulating splines...
I0921 15:26:19.657441 10389 client.go:171] LocalClient.Create took 10.876166724s
I0921 15:26:19.657453 10389 start.go:167] duration metric: libmachine.API.Create for "false-20220921151637-3535" took 10.876232302s
I0921 15:26:19.657465 10389 start.go:300] post-start starting for "false-20220921151637-3535" (driver="hyperkit")
I0921 15:26:19.657470 10389 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0921 15:26:19.657481 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.657606 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0921 15:26:19.657623 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.657718 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.657815 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.657900 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.657993 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:19.701002 10389 ssh_runner.go:195] Run: cat /etc/os-release
I0921 15:26:19.703660 10389 info.go:137] Remote host: Buildroot 2021.02.12
I0921 15:26:19.703675 10389 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/addons for local assets ...
I0921 15:26:19.703763 10389 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files for local assets ...
I0921 15:26:19.703898 10389 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem -> 35352.pem in /etc/ssl/certs
I0921 15:26:19.704044 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0921 15:26:19.710387 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:19.725495 10389 start.go:303] post-start completed in 68.018939ms
I0921 15:26:19.725521 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetConfigRaw
I0921 15:26:19.726077 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetIP
I0921 15:26:19.726225 10389 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/config.json ...
I0921 15:26:19.726508 10389 start.go:128] duration metric: createHost completed in 10.995583539s
I0921 15:26:19.726524 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.726609 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.726688 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.726756 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.726824 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.726940 10389 main.go:134] libmachine: Using SSH client type: native
I0921 15:26:19.727032 10389 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e5c40] 0x13e8dc0 <nil> [] 0s} 192.168.64.30 22 <nil> <nil>}
I0921 15:26:19.727039 10389 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0921 15:26:19.803566 10389 main.go:134] libmachine: SSH cmd err, output: <nil>: 1663799179.904471962
I0921 15:26:19.803578 10389 fix.go:207] guest clock: 1663799179.904471962
I0921 15:26:19.803583 10389 fix.go:220] Guest: 2022-09-21 15:26:19.904471962 -0700 PDT Remote: 2022-09-21 15:26:19.726515 -0700 PDT m=+11.397811697 (delta=177.956962ms)
I0921 15:26:19.803600 10389 fix.go:191] guest clock delta is within tolerance: 177.956962ms
I0921 15:26:19.803604 10389 start.go:83] releasing machines lock for "false-20220921151637-3535", held for 11.072844405s
I0921 15:26:19.803620 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.803781 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetIP
I0921 15:26:19.803886 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.803980 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804107 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804405 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804511 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:19.804569 10389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0921 15:26:19.804599 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.804676 10389 ssh_runner.go:195] Run: systemctl --version
I0921 15:26:19.804691 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:19.804696 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.804788 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.804809 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:19.804910 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.804933 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:19.804984 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:19.805022 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:19.805139 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:19.847227 10389 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:19.847314 10389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:19.886987 10389 docker.go:611] Got preloaded images:
I0921 15:26:19.887002 10389 docker.go:617] registry.k8s.io/kube-apiserver:v1.25.2 wasn't preloaded
I0921 15:26:19.887058 10389 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0921 15:26:19.893540 10389 ssh_runner.go:195] Run: which lz4
I0921 15:26:19.895930 10389 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0921 15:26:19.898413 10389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0921 15:26:19.898432 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (404136294 bytes)
I0921 15:26:21.239426 10389 docker.go:576] Took 1.343526 seconds to copy over tarball
I0921 15:26:21.239490 10389 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
I0921 15:26:24.582087 10389 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.342576242s)
I0921 15:26:24.582101 10389 ssh_runner.go:146] rm: /preloaded.tar.lz4
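
[editor's note] The preceding lines show the full preload path: stat confirms /preloaded.tar.lz4 is absent on the guest, the ~404MB archive is scp'd over (1.34s), extracted with `tar -I lz4 -C /var` (3.34s), then removed. A sketch of that remote sequence against a generic command runner — the Runner interface here is a stand-in, not minikube's actual ssh_runner API:

    package main

    import "fmt"

    // Runner abstracts "run a command on the guest" -- illustrative only.
    type Runner interface {
        Run(cmd string) error
        Copy(localPath, remotePath string) error
    }

    // installPreload mirrors the log's sequence: check, copy, extract, clean up.
    func installPreload(r Runner, local string) error {
        const remote = "/preloaded.tar.lz4"
        if err := r.Run(fmt.Sprintf("stat %s", remote)); err == nil {
            return nil // already present, nothing to copy
        }
        if err := r.Copy(local, remote); err != nil {
            return err
        }
        if err := r.Run(fmt.Sprintf("sudo tar -I lz4 -C /var -xf %s", remote)); err != nil {
            return err
        }
        return r.Run(fmt.Sprintf("rm %s", remote))
    }
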
I0921 15:26:24.608006 10389 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I0921 15:26:24.614121 10389 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2628 bytes)
I0921 15:26:24.625086 10389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:24.705194 10389 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:25.931663 10389 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.226446575s)
I0921 15:26:25.931758 10389 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0921 15:26:25.941064 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0921 15:26:25.952201 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:25.960686 10389 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0921 15:26:25.983070 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0921 15:26:25.991760 10389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0921 15:26:26.004137 10389 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0921 15:26:26.084992 10389 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0921 15:26:26.179551 10389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:26.278839 10389 ssh_runner.go:195] Run: sudo systemctl restart docker
I0921 15:26:27.498830 10389 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.219969179s)
I0921 15:26:27.498903 10389 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0921 15:26:27.582227 10389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:27.670077 10389 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0921 15:26:27.680350 10389 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0921 15:26:27.680426 10389 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0921 15:26:27.684229 10389 start.go:471] Will wait 60s for crictl version
I0921 15:26:27.684283 10389 ssh_runner.go:195] Run: sudo crictl version
I0921 15:26:27.710285 10389 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.18
RuntimeApiVersion: 1.41.0
I0921 15:26:27.710350 10389 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:27.730543 10389 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:27.776346 10389 out.go:204] * Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
I0921 15:26:27.776499 10389 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0921 15:26:27.779532 10389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
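That one-liner is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current host IP, and install the result with sudo cp, since a plain shell redirect into /etc/hosts would not run as root. Expanded for readability:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop the old entry, if any
      echo "192.168.64.1 host.minikube.internal"        # append the fresh mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                        # copy into place as root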
I0921 15:26:27.786983 10389 localpath.go:92] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/client.crt -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/client.crt
I0921 15:26:27.787207 10389 localpath.go:117] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/client.key -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/client.key
I0921 15:26:27.787377 10389 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:27.787423 10389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:27.803222 10389 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:27.803238 10389 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:27.803305 10389 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:27.818382 10389 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:27.818399 10389 cache_images.go:84] Images are preloaded, skipping loading
I0921 15:26:27.818461 10389 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
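The docker info query above is how minikube learns Docker's cgroup driver; the kubeadm and kubelet configs generated next must declare the same driver (systemd here), because a kubelet/runtime mismatch keeps the kubelet from starting. Checking both sides by hand would look like this sketch (the kubelet file only exists once kubeadm has written it):

    # Docker's side of the contract ...
    docker info --format '{{.CgroupDriver}}'

    # ... and the kubelet's side, from the config kubeadm writes later.
    grep cgroupDriver /var/lib/kubelet/config.yaml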
I0921 15:26:27.839813 10389 cni.go:95] Creating CNI manager for "false"
I0921 15:26:27.839834 10389 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0921 15:26:27.839848 10389 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.30 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20220921151637-3535 NodeName:false-20220921151637-3535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.30"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.30 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0921 15:26:27.839927 10389 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.30
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "false-20220921151637-3535"
kubeletExtraArgs:
node-ip: 192.168.64.30
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.30"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
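This rendered config is written below to /var/tmp/minikube/kubeadm.yaml.new and handed to kubeadm init. As a sketch (not something the test itself runs), kubeadm's dry-run mode is one way to validate such a file without touching the node:

    # Parse the config and print what kubeadm would do, creating nothing.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run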
I0921 15:26:27.839993 10389 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=false-20220921151637-3535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.30 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.2 ClusterName:false-20220921151637-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
I0921 15:26:27.840044 10389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
I0921 15:26:27.846485 10389 binaries.go:44] Found k8s binaries, skipping transfer
I0921 15:26:27.846528 10389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0921 15:26:27.852711 10389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0921 15:26:27.863719 10389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0921 15:26:27.874539 10389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0921 15:26:27.885620 10389 ssh_runner.go:195] Run: grep 192.168.64.30 control-plane.minikube.internal$ /etc/hosts
I0921 15:26:27.887836 10389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.30 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0921 15:26:27.895111 10389 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535 for IP: 192.168.64.30
I0921 15:26:27.895206 10389 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
I0921 15:26:27.895255 10389 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
I0921 15:26:27.895337 10389 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/client.key
I0921 15:26:27.895361 10389 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b
I0921 15:26:27.895377 10389 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b with IP's: [192.168.64.30 10.96.0.1 127.0.0.1 10.0.0.1]
I0921 15:26:28.090626 10389 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b ...
I0921 15:26:28.090639 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b: {Name:mkd0021f0880c17472bc34f2bb7b8af87d7a861d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.090958 10389 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b ...
I0921 15:26:28.090971 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b: {Name:mk0105b4976084bcdc477e16d22340c1f19a3c15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.091184 10389 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt.8d1fc39b -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt
I0921 15:26:28.091356 10389 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key.8d1fc39b -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key
I0921 15:26:28.091534 10389 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key
I0921 15:26:28.091547 10389 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt with IP's: []
I0921 15:26:28.128749 10389 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt ...
I0921 15:26:28.128759 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt: {Name:mkb235bcbbe39e8b7fc7fa2af71bd625a04514fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.129197 10389 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key ...
I0921 15:26:28.129204 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key: {Name:mkc7b1d50dce94488cf946b55e321c2fd8195b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:28.129644 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem (1338 bytes)
W0921 15:26:28.129681 10389 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535_empty.pem, impossibly tiny 0 bytes
I0921 15:26:28.129689 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
I0921 15:26:28.129738 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
I0921 15:26:28.129767 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
I0921 15:26:28.129794 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1679 bytes)
I0921 15:26:28.129854 10389 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:28.130421 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0921 15:26:28.147670 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0921 15:26:28.163433 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0921 15:26:28.178707 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/false-20220921151637-3535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0921 15:26:28.193799 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0921 15:26:28.208841 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0921 15:26:28.224170 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0921 15:26:28.239235 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0921 15:26:28.254997 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0921 15:26:28.270476 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem --> /usr/share/ca-certificates/3535.pem (1338 bytes)
I0921 15:26:28.285761 10389 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /usr/share/ca-certificates/35352.pem (1708 bytes)
I0921 15:26:28.300863 10389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0921 15:26:28.311541 10389 ssh_runner.go:195] Run: openssl version
I0921 15:26:28.314918 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35352.pem && ln -fs /usr/share/ca-certificates/35352.pem /etc/ssl/certs/35352.pem"
I0921 15:26:28.322006 10389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35352.pem
I0921 15:26:28.324825 10389 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:31 /usr/share/ca-certificates/35352.pem
I0921 15:26:28.324854 10389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35352.pem
I0921 15:26:28.328317 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35352.pem /etc/ssl/certs/3ec20f2e.0"
I0921 15:26:28.335399 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0921 15:26:28.342321 10389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:28.345213 10389 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:27 /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:28.345248 10389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:28.348680 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0921 15:26:28.355668 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3535.pem && ln -fs /usr/share/ca-certificates/3535.pem /etc/ssl/certs/3535.pem"
I0921 15:26:28.362704 10389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3535.pem
I0921 15:26:28.365564 10389 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:31 /usr/share/ca-certificates/3535.pem
I0921 15:26:28.365597 10389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3535.pem
I0921 15:26:28.369054 10389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3535.pem /etc/ssl/certs/51391683.0"
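The three test/ln pairs above build OpenSSL's hashed-directory layout: clients locate a CA in /etc/ssl/certs through a symlink named <subject-hash>.0, and the hash is exactly what openssl x509 -hash prints (b5213941 for minikubeCA.pem here). Reproducing one link by hand:

    # Compute the subject hash OpenSSL uses for lookup ...
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)

    # ... and create the hashed symlink only if it is not already present.
    sudo /bin/bash -c "test -L /etc/ssl/certs/${h}.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${h}.0"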
I0921 15:26:28.375971 10389 kubeadm.go:396] StartCluster: {Name:false-20220921151637-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:false-20220921151637-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.30 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:28.393673 10389 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:26:28.410852 10389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0921 15:26:28.417363 10389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0921 15:26:28.423501 10389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0921 15:26:28.429757 10389 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0921 15:26:28.429778 10389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0921 15:26:28.485563 10389 kubeadm.go:317] [init] Using Kubernetes version: v1.25.2
I0921 15:26:28.485628 10389 kubeadm.go:317] [preflight] Running pre-flight checks
I0921 15:26:28.613102 10389 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I0921 15:26:28.613192 10389 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0921 15:26:28.613262 10389 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0921 15:26:28.713134 10389 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0921 15:26:29.173173 10408 ssh_runner.go:235] Completed: sudo systemctl restart docker: (8.055980768s)
I0921 15:26:29.173240 10408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0921 15:26:29.288535 10408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0921 15:26:29.417731 10408 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0921 15:26:29.433270 10408 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0921 15:26:29.433356 10408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0921 15:26:29.447293 10408 start.go:471] Will wait 60s for crictl version
I0921 15:26:29.447353 10408 ssh_runner.go:195] Run: sudo crictl version
I0921 15:26:29.482799 10408 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.18
RuntimeApiVersion: 1.41.0
I0921 15:26:29.482858 10408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:29.651357 10408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0921 15:26:29.808439 10408 out.go:204] * Preparing Kubernetes v1.25.2 on Docker 20.10.18 ...
I0921 15:26:29.808534 10408 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0921 15:26:29.818111 10408 preload.go:132] Checking if preload exists for k8s version v1.25.2 and runtime docker
I0921 15:26:29.818177 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:29.873620 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:29.873633 10408 docker.go:542] Images already preloaded, skipping extraction
I0921 15:26:29.873699 10408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0921 15:26:29.929931 10408 docker.go:611] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.2
registry.k8s.io/kube-scheduler:v1.25.2
registry.k8s.io/kube-controller-manager:v1.25.2
registry.k8s.io/kube-proxy:v1.25.2
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0921 15:26:29.929952 10408 cache_images.go:84] Images are preloaded, skipping loading
I0921 15:26:29.930056 10408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0921 15:26:30.064287 10408 cni.go:95] Creating CNI manager for ""
I0921 15:26:30.064305 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:26:30.064320 10408 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0921 15:26:30.064331 10408 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.28 APIServerPort:8443 KubernetesVersion:v1.25.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220921152522-3535 NodeName:pause-20220921152522-3535 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.28 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I0921 15:26:30.064423 10408 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.28
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220921152522-3535"
kubeletExtraArgs:
node-ip: 192.168.64.28
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.28"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0921 15:26:30.064505 10408 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220921152522-3535 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.28 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0921 15:26:30.064579 10408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.2
I0921 15:26:30.076550 10408 binaries.go:44] Found k8s binaries, skipping transfer
I0921 15:26:30.076638 10408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0921 15:26:30.090012 10408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
I0921 15:26:30.137803 10408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0921 15:26:30.178146 10408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
I0921 15:26:30.203255 10408 ssh_runner.go:195] Run: grep 192.168.64.28 control-plane.minikube.internal$ /etc/hosts
I0921 15:26:30.209779 10408 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535 for IP: 192.168.64.28
I0921 15:26:30.209879 10408 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key
I0921 15:26:30.209934 10408 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key
I0921 15:26:30.210019 10408 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key
I0921 15:26:30.210082 10408 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.key.6733b561
I0921 15:26:30.210133 10408 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.key
I0921 15:26:30.210333 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem (1338 bytes)
W0921 15:26:30.210375 10408 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535_empty.pem, impossibly tiny 0 bytes
I0921 15:26:30.210388 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca-key.pem (1679 bytes)
I0921 15:26:30.210421 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/ca.pem (1078 bytes)
I0921 15:26:30.210453 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/cert.pem (1123 bytes)
I0921 15:26:30.210483 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/key.pem (1679 bytes)
I0921 15:26:30.210550 10408 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem (1708 bytes)
I0921 15:26:30.211086 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0921 15:26:30.279069 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0921 15:26:30.343250 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0921 15:26:30.413180 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0921 15:26:30.448798 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0921 15:26:30.476175 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0921 15:26:30.497204 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0921 15:26:30.524103 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0921 15:26:30.558966 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/files/etc/ssl/certs/35352.pem --> /usr/share/ca-certificates/35352.pem (1708 bytes)
I0921 15:26:30.576319 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0921 15:26:30.592912 10408 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/certs/3535.pem --> /usr/share/ca-certificates/3535.pem (1338 bytes)
I0921 15:26:30.609099 10408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0921 15:26:30.627179 10408 ssh_runner.go:195] Run: openssl version
I0921 15:26:30.632801 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3535.pem && ln -fs /usr/share/ca-certificates/3535.pem /etc/ssl/certs/3535.pem"
I0921 15:26:30.641473 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3535.pem
I0921 15:26:30.645794 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Sep 21 21:31 /usr/share/ca-certificates/3535.pem
I0921 15:26:30.645836 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3535.pem
I0921 15:26:30.649794 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3535.pem /etc/ssl/certs/51391683.0"
I0921 15:26:30.657630 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35352.pem && ln -fs /usr/share/ca-certificates/35352.pem /etc/ssl/certs/35352.pem"
I0921 15:26:30.665747 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35352.pem
I0921 15:26:30.669804 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Sep 21 21:31 /usr/share/ca-certificates/35352.pem
I0921 15:26:30.669850 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35352.pem
I0921 15:26:30.679638 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35352.pem /etc/ssl/certs/3ec20f2e.0"
I0921 15:26:30.700907 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0921 15:26:30.734369 10408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.762750 10408 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Sep 21 21:27 /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.762827 10408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0921 15:26:30.777627 10408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0921 15:26:30.785856 10408 kubeadm.go:396] StartCluster: {Name:pause-20220921152522-3535 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.27.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.34@sha256:f2a1e577e43fd6769f35cdb938f6d21c3dacfd763062d119cade738fa244720c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.2 ClusterName:pause-20220921152522-3535 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0921 15:26:30.785963 10408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:26:30.816264 10408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0921 15:26:30.823179 10408 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0921 15:26:30.823195 10408 kubeadm.go:627] restartCluster start
I0921 15:26:30.823236 10408 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0921 15:26:30.837045 10408 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0921 15:26:30.837457 10408 kubeconfig.go:92] found "pause-20220921152522-3535" server: "https://192.168.64.28:8443"
I0921 15:26:30.837839 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:26:30.838375 10408 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0921 15:26:30.852535 10408 api_server.go:165] Checking apiserver status ...
I0921 15:26:30.852588 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:26:30.868059 10408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4520/cgroup
I0921 15:26:30.876185 10408 api_server.go:181] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope"
I0921 15:26:30.876238 10408 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope/freezer.state
I0921 15:26:30.912452 10408 api_server.go:203] freezer state: "THAWED"
I0921 15:26:30.912472 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
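Before trusting /healthz, the restart path first confirms the apiserver's freezer cgroup is THAWED; on a paused cluster the same file reads FROZEN, which is the state TestPause toggles. The equivalent manual probe, using the PID and cgroup path from the log above:

    # Locate the apiserver and its freezer cgroup.
    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' /proc/${pid}/cgroup

    # THAWED = runnable, FROZEN = paused. Then probe the endpoint itself
    # (-k: the serving cert chains to the cluster-local minikubeCA).
    curl -k https://192.168.64.28:8443/healthz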
I0921 15:26:28.751035 10389 out.go:204] - Generating certificates and keys ...
I0921 15:26:28.751152 10389 kubeadm.go:317] [certs] Using existing ca certificate authority
I0921 15:26:28.751236 10389 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I0921 15:26:28.782482 10389 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I0921 15:26:29.137189 10389 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I0921 15:26:29.241745 10389 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I0921 15:26:29.350166 10389 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I0921 15:26:29.505698 10389 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I0921 15:26:29.505932 10389 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [false-20220921151637-3535 localhost] and IPs [192.168.64.30 127.0.0.1 ::1]
I0921 15:26:29.604706 10389 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I0921 15:26:29.604909 10389 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [false-20220921151637-3535 localhost] and IPs [192.168.64.30 127.0.0.1 ::1]
I0921 15:26:29.834088 10389 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I0921 15:26:29.943628 10389 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I0921 15:26:30.177452 10389 kubeadm.go:317] [certs] Generating "sa" key and public key
I0921 15:26:30.177562 10389 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0921 15:26:30.679764 10389 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I0921 15:26:30.762950 10389 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0921 15:26:30.975611 10389 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0921 15:26:31.368343 10389 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0921 15:26:31.380985 10389 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0921 15:26:31.381763 10389 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0921 15:26:31.381810 10389 kubeadm.go:317] [kubelet-start] Starting the kubelet
I0921 15:26:31.468060 10389 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0921 15:26:31.487973 10389 out.go:204] - Booting up control plane ...
I0921 15:26:31.488058 10389 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0921 15:26:31.488140 10389 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0921 15:26:31.488216 10389 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0921 15:26:31.488288 10389 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0921 15:26:31.488408 10389 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0921 15:26:35.914013 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:35.914061 10408 retry.go:31] will retry after 263.082536ms: state is "Stopped"
I0921 15:26:36.179260 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:41.180983 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:41.181007 10408 retry.go:31] will retry after 381.329545ms: state is "Stopped"
I0921 15:26:43.469751 10389 kubeadm.go:317] [apiclient] All control plane components are healthy after 12.003918 seconds
I0921 15:26:43.469852 10389 kubeadm.go:317] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0921 15:26:43.477591 10389 kubeadm.go:317] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0921 15:26:44.989240 10389 kubeadm.go:317] [upload-certs] Skipping phase. Please see --upload-certs
I0921 15:26:44.989436 10389 kubeadm.go:317] [mark-control-plane] Marking the node false-20220921151637-3535 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0921 15:26:45.496387 10389 kubeadm.go:317] [bootstrap-token] Using token: gw23ty.315hs4knjisv0ijr
I0921 15:26:41.563913 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:45.534959 10389 out.go:204] - Configuring RBAC rules ...
I0921 15:26:45.535164 10389 kubeadm.go:317] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0921 15:26:45.535348 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0921 15:26:45.575312 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0921 15:26:45.577832 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0921 15:26:45.580659 10389 kubeadm.go:317] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0921 15:26:45.582707 10389 kubeadm.go:317] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0921 15:26:45.589329 10389 kubeadm.go:317] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0921 15:26:45.765645 10389 kubeadm.go:317] [addons] Applied essential addon: CoreDNS
I0921 15:26:45.903347 10389 kubeadm.go:317] [addons] Applied essential addon: kube-proxy
I0921 15:26:45.903987 10389 kubeadm.go:317]
I0921 15:26:45.904052 10389 kubeadm.go:317] Your Kubernetes control-plane has initialized successfully!
I0921 15:26:45.904063 10389 kubeadm.go:317]
I0921 15:26:45.904125 10389 kubeadm.go:317] To start using your cluster, you need to run the following as a regular user:
I0921 15:26:45.904133 10389 kubeadm.go:317]
I0921 15:26:45.904151 10389 kubeadm.go:317] mkdir -p $HOME/.kube
I0921 15:26:45.904270 10389 kubeadm.go:317] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0921 15:26:45.904382 10389 kubeadm.go:317] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0921 15:26:45.904399 10389 kubeadm.go:317]
I0921 15:26:45.904507 10389 kubeadm.go:317] Alternatively, if you are the root user, you can run:
I0921 15:26:45.904518 10389 kubeadm.go:317]
I0921 15:26:45.904599 10389 kubeadm.go:317] export KUBECONFIG=/etc/kubernetes/admin.conf
I0921 15:26:45.904608 10389 kubeadm.go:317]
I0921 15:26:45.904652 10389 kubeadm.go:317] You should now deploy a pod network to the cluster.
I0921 15:26:45.904743 10389 kubeadm.go:317] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0921 15:26:45.904821 10389 kubeadm.go:317] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0921 15:26:45.904853 10389 kubeadm.go:317]
I0921 15:26:45.904929 10389 kubeadm.go:317] You can now join any number of control-plane nodes by copying certificate authorities
I0921 15:26:45.905009 10389 kubeadm.go:317] and service account keys on each node and then running the following as root:
I0921 15:26:45.905013 10389 kubeadm.go:317]
I0921 15:26:45.905081 10389 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token gw23ty.315hs4knjisv0ijr \
I0921 15:26:45.905165 10389 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:706daf9048108456ab2312c550f8f0627aeca112971c3da5a874015a0cee155c \
I0921 15:26:45.905182 10389 kubeadm.go:317] --control-plane
I0921 15:26:45.905187 10389 kubeadm.go:317]
I0921 15:26:45.905254 10389 kubeadm.go:317] Then you can join any number of worker nodes by running the following on each as root:
I0921 15:26:45.905261 10389 kubeadm.go:317]
I0921 15:26:45.905329 10389 kubeadm.go:317] kubeadm join control-plane.minikube.internal:8443 --token gw23ty.315hs4knjisv0ijr \
I0921 15:26:45.905405 10389 kubeadm.go:317] --discovery-token-ca-cert-hash sha256:706daf9048108456ab2312c550f8f0627aeca112971c3da5a874015a0cee155c
I0921 15:26:45.906103 10389 kubeadm.go:317] W0921 22:26:28.588830 1256 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0921 15:26:45.906192 10389 kubeadm.go:317] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
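For reference, the --discovery-token-ca-cert-hash that kubeadm prints in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch that reproduces it from the control-plane node's ca.crt (path assumed from the standard kubeadm layout):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path assumed: the CA kubeadm writes on the control-plane node.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }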
I0921 15:26:45.906207 10389 cni.go:95] Creating CNI manager for "false"
I0921 15:26:45.906225 10389 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0921 15:26:45.906290 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:45.906301 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl label nodes minikube.k8s.io/version=v1.27.0 minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4 minikube.k8s.io/name=false-20220921151637-3535 minikube.k8s.io/updated_at=2022_09_21T15_26_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:46.087744 10389 ops.go:34] apiserver oom_adj: -16
I0921 15:26:46.087768 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:46.661358 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:47.163233 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:47.661991 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:48.162015 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:46.564586 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:26:46.766257 10408 api_server.go:165] Checking apiserver status ...
I0921 15:26:46.766358 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:26:46.776615 10408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4520/cgroup
I0921 15:26:46.782756 10408 api_server.go:181] apiserver freezer: "2:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope"
I0921 15:26:46.782801 10408 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc22aaa89e8234f176d6344e50152f4.slice/docker-3a4741e1fe3c0996cab4975bd514e9991794f86cf96c9fe0863c714a6d86e26c.scope/freezer.state
I0921 15:26:46.789298 10408 api_server.go:203] freezer state: "THAWED"
I0921 15:26:46.789309 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:51.288815 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": read tcp 192.168.64.1:52998->192.168.64.28:8443: read: connection reset by peer
I0921 15:26:51.288848 10408 retry.go:31] will retry after 242.214273ms: state is "Stopped"
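The api_server.go:165-203 stanza just above distinguishes a dead apiserver from a paused one: it resolves the process's freezer cgroup from /proc/<pid>/cgroup and then reads freezer.state ("THAWED" vs "FROZEN"). A rough sketch of that two-step lookup, assuming cgroup v1 and a pid obtained as in the pgrep line above:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // freezerState maps a pid to its cgroup-v1 freezer state ("THAWED" or "FROZEN").
    func freezerState(pid int) (string, error) {
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(data), "\n") {
            // Lines look like "2:freezer:/kubepods.slice/...scope".
            parts := strings.SplitN(line, ":", 3)
            if len(parts) == 3 && parts[1] == "freezer" {
                state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
                if err != nil {
                    return "", err
                }
                return strings.TrimSpace(string(state)), nil
            }
        }
        return "", fmt.Errorf("no freezer controller found for pid %d", pid)
    }

    func main() {
        state, err := freezerState(4520) // pid taken from the pgrep output above
        if err != nil {
            panic(err)
        }
        fmt.Println("freezer state:", state)
    }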
I0921 15:26:48.662979 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:49.163023 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:49.662057 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:50.162176 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:50.663300 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:51.162051 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:51.661237 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:52.161318 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:52.663231 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:53.162177 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:51.532207 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:51.632400 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:51.632425 10408 retry.go:31] will retry after 300.724609ms: state is "Stopped"
I0921 15:26:51.934415 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:52.035144 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:52.035176 10408 retry.go:31] will retry after 427.113882ms: state is "Stopped"
I0921 15:26:52.464328 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:52.566391 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:52.566426 10408 retry.go:31] will retry after 382.2356ms: state is "Stopped"
I0921 15:26:52.948987 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:53.049570 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:53.049605 10408 retry.go:31] will retry after 505.529557ms: state is "Stopped"
I0921 15:26:53.556334 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:53.658245 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:53.658268 10408 retry.go:31] will retry after 609.195524ms: state is "Stopped"
I0921 15:26:54.269593 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:54.371296 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:54.371340 10408 retry.go:31] will retry after 858.741692ms: state is "Stopped"
I0921 15:26:55.230116 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:55.331214 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:55.331251 10408 retry.go:31] will retry after 1.201160326s: state is "Stopped"
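Each retry.go:31 line in this stanza is one turn of the same loop: GET /healthz, treat any transport error (timeout, connection refused, connection reset) as "Stopped", sleep a growing interval, try again. A minimal, hypothetical version of that poll (minikube's real retry helper is more general):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a cert for its own IPs, so a bare probe
            // typically skips verification (or installs the cluster CA).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        delay := 300 * time.Millisecond
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.64.28:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            fmt.Printf("will retry after %v\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the interval, roughly as the log shows
        }
        fmt.Println("timed out waiting for healthz")
    }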
I0921 15:26:53.661186 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:54.163293 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:54.661188 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:55.161203 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:55.661768 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:56.161278 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:56.661209 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:57.161293 10389 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0921 15:26:57.227024 10389 kubeadm.go:1067] duration metric: took 11.320770189s to wait for elevateKubeSystemPrivileges.
I0921 15:26:57.227047 10389 kubeadm.go:398] StartCluster complete in 28.851048117s
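The burst of identical "kubectl get sa default" runs above is a poll: the "default" service account is created asynchronously after kubeadm init, so the command is retried roughly every 500ms until it exits 0 (11.3s here, per the duration metric). The same wait, sketched locally with os/exec (in the log it runs over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.2/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }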
I0921 15:26:57.227062 10389 settings.go:142] acquiring lock: {Name:mkb00f1de0b91d8f67bd982eab088d27845674b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:57.227132 10389 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:26:57.227768 10389 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mka2f83e1cbd4124ff7179732fbb172d977cf2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:26:57.740783 10389 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20220921151637-3535" rescaled to 1
I0921 15:26:57.740812 10389 start.go:211] Will wait 5m0s for node &{Name: IP:192.168.64.30 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0921 15:26:57.740821 10389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0921 15:26:57.740854 10389 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0921 15:26:57.740962 10389 config.go:180] Loaded profile config "false-20220921151637-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:26:57.786566 10389 addons.go:65] Setting storage-provisioner=true in profile "false-20220921151637-3535"
I0921 15:26:57.786585 10389 addons.go:153] Setting addon storage-provisioner=true in "false-20220921151637-3535"
I0921 15:26:57.786585 10389 addons.go:65] Setting default-storageclass=true in profile "false-20220921151637-3535"
I0921 15:26:57.786492 10389 out.go:177] * Verifying Kubernetes components...
W0921 15:26:57.786593 10389 addons.go:162] addon storage-provisioner should already be in state true
I0921 15:26:57.786605 10389 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20220921151637-3535"
I0921 15:26:57.786637 10389 host.go:66] Checking if "false-20220921151637-3535" exists ...
I0921 15:26:57.823578 10389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:26:57.824055 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.824059 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.824098 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.824128 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.831913 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53008
I0921 15:26:57.831981 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53009
I0921 15:26:57.832340 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.832352 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.832684 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.832694 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.832700 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.832713 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.832896 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.832944 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.832993 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:57.833084 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:57.833170 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:57.833345 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.833360 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.839848 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53012
I0921 15:26:57.840218 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.840571 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.840590 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.840793 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.840888 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:57.840964 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:57.841057 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:57.841584 10389 addons.go:153] Setting addon default-storageclass=true in "false-20220921151637-3535"
W0921 15:26:57.841596 10389 addons.go:162] addon default-storageclass should already be in state true
I0921 15:26:57.841612 10389 host.go:66] Checking if "false-20220921151637-3535" exists ...
I0921 15:26:57.841859 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.841874 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.841903 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:57.848370 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53014
I0921 15:26:57.879837 10389 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0921 15:26:57.853392 10389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0921 15:26:57.856801 10389 node_ready.go:35] waiting up to 5m0s for node "false-20220921151637-3535" to be "Ready" ...
I0921 15:26:57.880708 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.901652 10389 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:26:57.901674 10389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0921 15:26:57.901717 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:57.902040 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:57.902220 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.902228 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:57.902244 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.902481 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:57.902678 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.902711 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:57.903323 10389 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:26:57.903348 10389 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:26:57.907923 10389 node_ready.go:49] node "false-20220921151637-3535" has status "Ready":"True"
I0921 15:26:57.907937 10389 node_ready.go:38] duration metric: took 6.436476ms waiting for node "false-20220921151637-3535" to be "Ready" ...
I0921 15:26:57.907943 10389 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:26:57.910202 10389 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53017
I0921 15:26:57.910546 10389 main.go:134] libmachine: () Calling .GetVersion
I0921 15:26:57.910873 10389 main.go:134] libmachine: Using API Version 1
I0921 15:26:57.910889 10389 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:26:57.911076 10389 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:26:57.911170 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetState
I0921 15:26:57.911256 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:26:57.911338 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | hyperkit pid from json: 10400
I0921 15:26:57.912159 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .DriverName
I0921 15:26:57.912315 10389 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0921 15:26:57.912323 10389 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0921 15:26:57.912331 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHHostname
I0921 15:26:57.912418 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHPort
I0921 15:26:57.912497 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHKeyPath
I0921 15:26:57.912584 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .GetSSHUsername
I0921 15:26:57.912659 10389 sshutil.go:53] new ssh client: &{IP:192.168.64.30 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/false-20220921151637-3535/id_rsa Username:docker}
I0921 15:26:57.919652 10389 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-pns2v" in "kube-system" namespace to be "Ready" ...
I0921 15:26:58.008677 10389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0921 15:26:58.015955 10389 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:26:59.137018 10389 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.235523727s)
I0921 15:26:59.137048 10389 start.go:810] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS
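The pipeline completed above splices a hosts{} stanza in front of the Corefile's "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the hypervisor gateway (192.168.64.1) inside the cluster. The same transform as a string edit, a sketch of what the sed does rather than minikube's code:

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block ahead of the forward plugin so
    // host.minikube.internal resolves in-cluster to the given IP.
    func injectHostRecord(corefile, hostIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        return strings.Replace(corefile, "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        fmt.Println(injectHostRecord(corefile, "192.168.64.1"))
    }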
I0921 15:26:59.214166 10389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.198193011s)
I0921 15:26:59.214197 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214212 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214261 10389 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.205563718s)
I0921 15:26:59.214276 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214283 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214398 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Closing plugin on server side
I0921 15:26:59.214419 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Closing plugin on server side
I0921 15:26:59.214438 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214449 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214452 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214458 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214464 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214465 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214473 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214483 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214582 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214593 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214605 10389 main.go:134] libmachine: Making call to close driver server
I0921 15:26:59.214615 10389 main.go:134] libmachine: (false-20220921151637-3535) Calling .Close
I0921 15:26:59.214655 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214663 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214784 10389 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:26:59.214810 10389 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:26:59.214847 10389 main.go:134] libmachine: (false-20220921151637-3535) DBG | Closing plugin on server side
I0921 15:26:59.257530 10389 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0921 15:26:56.533116 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:56.635643 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:56.635670 10408 retry.go:31] will retry after 1.723796097s: state is "Stopped"
I0921 15:26:58.359704 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:26:58.461478 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:26:58.461505 10408 retry.go:31] will retry after 1.596532639s: state is "Stopped"
I0921 15:27:00.059136 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:00.159945 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": dial tcp 192.168.64.28:8443: connect: connection refused
I0921 15:27:00.159971 10408 api_server.go:165] Checking apiserver status ...
I0921 15:27:00.160018 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0921 15:27:00.169632 10408 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0921 15:27:00.169647 10408 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
I0921 15:27:00.169656 10408 kubeadm.go:1114] stopping kube-system containers ...
I0921 15:27:00.169722 10408 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0921 15:27:00.201882 10408 docker.go:443] Stopping containers: [d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49]
I0921 15:27:00.201952 10408 ssh_runner.go:195] Run: docker stop d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49
I0921 15:26:59.279382 10389 addons.go:414] enableAddons completed in 1.538525769s
I0921 15:26:59.940505 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:02.438511 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:05.344188 10408 ssh_runner.go:235] Completed: docker stop d7cbc4c453b0 823942ffecb6 283fac289f86 c2e8fe8419a9 4934b6e15931 3a4741e1fe3c e1129956136e 3d0143698c2d 163c82f50ebf 994dd806c8bf eb1318ed7bcc 1a3e01fca571 5fc70456f2e3 54e273754edc 52c58a26f4cc 4ad5f51c22d6 3ac721feff71 bf1833cd9ccb 532325020c06 7d83f8f7d4ba b943e6acece0 25c3a0228e49: (5.142213633s)
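Stopping the kube-system containers is a two-step docker call, which is why the single docker stop above accounts for ~5s of the restart: list the IDs of containers whose names match the kubelet's k8s_<container>_<pod>_(kube-system)_ pattern, then stop them all in one invocation. Sketched with os/exec:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List IDs of containers the kubelet created for kube-system pods.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println("no kube-system containers")
            return
        }
        // Stop them all in one invocation, as the log does.
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            panic(err)
        }
        fmt.Printf("stopped %d containers\n", len(ids))
    }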
I0921 15:27:05.344244 10408 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0921 15:27:05.419551 10408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0921 15:27:05.433375 10408 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Sep 21 22:25 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Sep 21 22:25 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2039 Sep 21 22:25 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Sep 21 22:25 /etc/kubernetes/scheduler.conf
I0921 15:27:05.433432 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0921 15:27:05.439704 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0921 15:27:05.445874 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0921 15:27:05.453215 10408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0921 15:27:05.453270 10408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0921 15:27:05.459417 10408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0921 15:27:05.465309 10408 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0921 15:27:05.465358 10408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0921 15:27:05.476008 10408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0921 15:27:05.484410 10408 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0921 15:27:05.484426 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:05.534434 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:04.440960 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:06.941172 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:06.469884 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.628867 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.698897 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:06.759299 10408 api_server.go:51] waiting for apiserver process to appear ...
I0921 15:27:06.759353 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:27:06.778540 10408 api_server.go:71] duration metric: took 19.241402ms to wait for apiserver process to appear ...
I0921 15:27:06.778552 10408 api_server.go:87] waiting for apiserver healthz status ...
I0921 15:27:06.778559 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:09.441803 10389 pod_ready.go:102] pod "coredns-565d847f94-pns2v" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:09.938218 10389 pod_ready.go:97] error getting pod "coredns-565d847f94-pns2v" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-pns2v" not found
I0921 15:27:09.938237 10389 pod_ready.go:81] duration metric: took 12.018553938s waiting for pod "coredns-565d847f94-pns2v" in "kube-system" namespace to be "Ready" ...
E0921 15:27:09.938247 10389 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-565d847f94-pns2v" in "kube-system" namespace (skipping!): pods "coredns-565d847f94-pns2v" not found
I0921 15:27:09.938253 10389 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-wwhtk" in "kube-system" namespace to be "Ready" ...
I0921 15:27:11.950940 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:11.780440 10408 api_server.go:256] stopped: https://192.168.64.28:8443/healthz: Get "https://192.168.64.28:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0921 15:27:12.280518 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.000183 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0921 15:27:14.000198 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0921 15:27:14.282668 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.289281 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0921 15:27:14.289293 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0921 15:27:14.780762 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:14.786529 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0921 15:27:14.786540 10408 api_server.go:102] status: https://192.168.64.28:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0921 15:27:15.280930 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:15.288106 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 200:
ok
I0921 15:27:15.292969 10408 api_server.go:140] control plane version: v1.25.2
I0921 15:27:15.292981 10408 api_server.go:130] duration metric: took 8.514415313s to wait for apiserver health ...
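The 403 and 500 bodies above are the apiserver's verbose /healthz output, one [+] or [-] line per check; the two [-]poststarthook lines are what held the endpoint at 500 until RBAC bootstrap finished and the 200 "ok" came back at 15:27:15. A hypothetical helper that extracts the failing check names from such a body:

    package main

    import (
        "fmt"
        "strings"
    )

    // failedChecks pulls the names of failing checks out of a verbose /healthz
    // body, e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
    func failedChecks(body string) []string {
        var failed []string
        for _, line := range strings.Split(body, "\n") {
            if strings.HasPrefix(line, "[-]") {
                name := strings.TrimPrefix(line, "[-]")
                if i := strings.IndexByte(name, ' '); i >= 0 {
                    name = name[:i]
                }
                failed = append(failed, name)
            }
        }
        return failed
    }

    func main() {
        body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
        fmt.Println(failedChecks(body)) // [poststarthook/rbac/bootstrap-roles]
    }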
I0921 15:27:15.292986 10408 cni.go:95] Creating CNI manager for ""
I0921 15:27:15.292994 10408 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0921 15:27:15.293004 10408 system_pods.go:43] waiting for kube-system pods to appear ...
I0921 15:27:15.298309 10408 system_pods.go:59] 6 kube-system pods found
I0921 15:27:15.298324 10408 system_pods.go:61] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:15.298330 10408 system_pods.go:61] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0921 15:27:15.298335 10408 system_pods.go:61] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0921 15:27:15.298340 10408 system_pods.go:61] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0921 15:27:15.298344 10408 system_pods.go:61] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:15.298348 10408 system_pods.go:61] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0921 15:27:15.298352 10408 system_pods.go:74] duration metric: took 5.344262ms to wait for pod list to return data ...
I0921 15:27:15.298357 10408 node_conditions.go:102] verifying NodePressure condition ...
I0921 15:27:15.300304 10408 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0921 15:27:15.300319 10408 node_conditions.go:123] node cpu capacity is 2
I0921 15:27:15.300328 10408 node_conditions.go:105] duration metric: took 1.967816ms to run NodePressure ...
I0921 15:27:15.300342 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0921 15:27:15.402185 10408 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0921 15:27:15.405062 10408 kubeadm.go:778] kubelet initialised
I0921 15:27:15.405072 10408 kubeadm.go:779] duration metric: took 2.873657ms waiting for restarted kubelet to initialise ...
I0921 15:27:15.405080 10408 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:15.408132 10408 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:15.411452 10408 pod_ready.go:92] pod "coredns-565d847f94-9wtnp" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:15.411459 10408 pod_ready.go:81] duration metric: took 3.317632ms waiting for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:15.411465 10408 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:14.445892 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:16.945831 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:17.420289 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:19.421503 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:18.946719 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:20.947256 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:22.950309 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:21.919889 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:24.419226 10408 pod_ready.go:102] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:25.920028 10408 pod_ready.go:92] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.920043 10408 pod_ready.go:81] duration metric: took 10.508561161s waiting for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.920049 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.923063 10408 pod_ready.go:92] pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.923071 10408 pod_ready.go:81] duration metric: took 3.017613ms waiting for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.923077 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.926284 10408 pod_ready.go:92] pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.926292 10408 pod_ready.go:81] duration metric: took 3.20987ms waiting for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.926297 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.929448 10408 pod_ready.go:92] pod "kube-proxy-5c7jc" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.929456 10408 pod_ready.go:81] duration metric: took 3.154194ms waiting for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.929461 10408 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.932599 10408 pod_ready.go:92] pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:25.932606 10408 pod_ready.go:81] duration metric: took 3.140486ms waiting for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:25.932610 10408 pod_ready.go:38] duration metric: took 10.527510396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
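The pod_ready.go wait that just completed is a label-selector poll over kube-system: each system-critical pod must report the PodReady condition as True. Roughly, with client-go (kubeconfig path from the log; selector narrowed to etcd for brevity):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func allReady(pods []corev1.Pod) bool {
        if len(pods) == 0 {
            return false
        }
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "component=etcd"})
            if err == nil && allReady(pods.Items) {
                fmt.Println("etcd pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for etcd pod")
    }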
I0921 15:27:25.932619 10408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0921 15:27:25.939997 10408 ops.go:34] apiserver oom_adj: -16
I0921 15:27:25.940008 10408 kubeadm.go:631] restartCluster took 55.116747244s
I0921 15:27:25.940013 10408 kubeadm.go:398] StartCluster complete in 55.154103553s
I0921 15:27:25.940027 10408 settings.go:142] acquiring lock: {Name:mkb00f1de0b91d8f67bd982eab088d27845674b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:27:25.940102 10408 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig
I0921 15:27:25.941204 10408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/kubeconfig: {Name:mka2f83e1cbd4124ff7179732fbb172d977cf2f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0921 15:27:25.942042 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:27:25.944188 10408 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220921152522-3535" rescaled to 1
I0921 15:27:25.944221 10408 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.64.28 Port:8443 KubernetesVersion:v1.25.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0921 15:27:25.944255 10408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0921 15:27:25.944277 10408 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0921 15:27:25.944378 10408 config.go:180] Loaded profile config "pause-20220921152522-3535": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.2
I0921 15:27:25.967437 10408 addons.go:65] Setting storage-provisioner=true in profile "pause-20220921152522-3535"
I0921 15:27:25.967440 10408 addons.go:65] Setting default-storageclass=true in profile "pause-20220921152522-3535"
I0921 15:27:25.967359 10408 out.go:177] * Verifying Kubernetes components...
I0921 15:27:25.967453 10408 addons.go:153] Setting addon storage-provisioner=true in "pause-20220921152522-3535"
I0921 15:27:25.967457 10408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220921152522-3535"
W0921 15:27:25.967460 10408 addons.go:162] addon storage-provisioner should already be in state true
I0921 15:27:26.012377 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:27:26.012436 10408 host.go:66] Checking if "pause-20220921152522-3535" exists ...
I0921 15:27:26.012762 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.012761 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.012794 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.012829 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.019897 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53028
I0921 15:27:26.020028 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53029
I0921 15:27:26.020328 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.020394 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.020706 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.020719 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.020801 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.020817 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.020929 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.021015 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.021115 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.021203 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.021283 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.021419 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.021443 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.023750 10408 kapi.go:59] client config for pause-20220921152522-3535: &rest.Config{Host:"https://192.168.64.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/profiles/pause-20220921152522-3535/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x233b400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0921 15:27:26.027574 10408 addons.go:153] Setting addon default-storageclass=true in "pause-20220921152522-3535"
W0921 15:27:26.027587 10408 addons.go:162] addon default-storageclass should already be in state true
I0921 15:27:26.027606 10408 host.go:66] Checking if "pause-20220921152522-3535" exists ...
I0921 15:27:26.027788 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53032
I0921 15:27:26.027854 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.027880 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.028560 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.029753 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.029767 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.030003 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.030113 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.030207 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.030282 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.031135 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:27:26.034331 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53034
I0921 15:27:26.055199 10408 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0921 15:27:26.038435 10408 node_ready.go:35] waiting up to 6m0s for node "pause-20220921152522-3535" to be "Ready" ...
I0921 15:27:26.038466 10408 start.go:790] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0921 15:27:26.055642 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.075151 10408 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:27:26.075161 10408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0921 15:27:26.075184 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:27:26.075306 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:27:26.075441 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.075451 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.075455 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:27:26.075546 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:27:26.075643 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:27:26.075669 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.076075 10408 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0921 15:27:26.076097 10408 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0921 15:27:26.082485 10408 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:53037
I0921 15:27:26.082858 10408 main.go:134] libmachine: () Calling .GetVersion
I0921 15:27:26.083217 10408 main.go:134] libmachine: Using API Version 1
I0921 15:27:26.083234 10408 main.go:134] libmachine: () Calling .SetConfigRaw
I0921 15:27:26.083443 10408 main.go:134] libmachine: () Calling .GetMachineName
I0921 15:27:26.083534 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetState
I0921 15:27:26.083608 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0921 15:27:26.083699 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | hyperkit pid from json: 10295
I0921 15:27:26.084503 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .DriverName
I0921 15:27:26.084648 10408 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0921 15:27:26.084657 10408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0921 15:27:26.084665 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHHostname
I0921 15:27:26.084734 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHPort
I0921 15:27:26.084830 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHKeyPath
I0921 15:27:26.084916 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .GetSSHUsername
I0921 15:27:26.085010 10408 sshutil.go:53] new ssh client: &{IP:192.168.64.28 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14995-2679-411d4579fd248fd57a4259437564c3e08f354535/.minikube/machines/pause-20220921152522-3535/id_rsa Username:docker}
I0921 15:27:26.117393 10408 node_ready.go:49] node "pause-20220921152522-3535" has status "Ready":"True"
I0921 15:27:26.117403 10408 node_ready.go:38] duration metric: took 42.373374ms waiting for node "pause-20220921152522-3535" to be "Ready" ...
I0921 15:27:26.117410 10408 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0921 15:27:26.127239 10408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0921 15:27:26.137634 10408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
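Both addon manifests are first scp'd into /etc/kubernetes/addons and then applied with the node's own bundled kubectl under sudo, over the SSH session built above. A rough standalone equivalent of that apply step, shelling out to ssh (host, kubectl path, and command string taken from the surrounding log lines; the key path is a placeholder, error handling minimal):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the node's bundled kubectl over SSH, as ssh_runner.go does above.
        cmd := exec.Command("ssh",
            "-i", "/path/to/machines/<profile>/id_rsa", // placeholder key path
            "docker@192.168.64.28",
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
                "/var/lib/minikube/binaries/v1.25.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }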
I0921 15:27:26.319821 10408 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:26.697611 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.697627 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.697784 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.697793 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.697804 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.697809 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.697836 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.697938 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.697946 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.697962 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712622 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712636 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.712825 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.712834 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.712839 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712844 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712846 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.712954 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.712962 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.712969 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.712973 10408 main.go:134] libmachine: Making call to close driver server
I0921 15:27:26.712981 10408 main.go:134] libmachine: (pause-20220921152522-3535) Calling .Close
I0921 15:27:26.713114 10408 main.go:134] libmachine: Successfully made call to close driver server
I0921 15:27:26.713128 10408 main.go:134] libmachine: Making call to close connection to plugin binary
I0921 15:27:26.713142 10408 main.go:134] libmachine: (pause-20220921152522-3535) DBG | Closing plugin on server side
I0921 15:27:26.735926 10408 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0921 15:27:25.446939 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:27.947781 10389 pod_ready.go:102] pod "coredns-565d847f94-wwhtk" in "kube-system" namespace has status "Ready":"False"
I0921 15:27:26.773142 10408 addons.go:414] enableAddons completed in 828.831417ms
I0921 15:27:26.776027 10408 pod_ready.go:92] pod "coredns-565d847f94-9wtnp" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:26.776040 10408 pod_ready.go:81] duration metric: took 456.205251ms waiting for pod "coredns-565d847f94-9wtnp" in "kube-system" namespace to be "Ready" ...
I0921 15:27:26.776049 10408 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.117622 10408 pod_ready.go:92] pod "etcd-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.117632 10408 pod_ready.go:81] duration metric: took 341.577773ms waiting for pod "etcd-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.117638 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.518637 10408 pod_ready.go:92] pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.518650 10408 pod_ready.go:81] duration metric: took 401.006674ms waiting for pod "kube-apiserver-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.518660 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.918763 10408 pod_ready.go:92] pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:27.918778 10408 pod_ready.go:81] duration metric: took 400.10892ms waiting for pod "kube-controller-manager-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:27.918787 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.318657 10408 pod_ready.go:92] pod "kube-proxy-5c7jc" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:28.318670 10408 pod_ready.go:81] duration metric: took 399.877205ms waiting for pod "kube-proxy-5c7jc" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.318678 10408 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.720230 10408 pod_ready.go:92] pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace has status "Ready":"True"
I0921 15:27:28.720243 10408 pod_ready.go:81] duration metric: took 401.55845ms waiting for pod "kube-scheduler-pause-20220921152522-3535" in "kube-system" namespace to be "Ready" ...
I0921 15:27:28.720250 10408 pod_ready.go:38] duration metric: took 2.602830576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
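Each of the pod_ready.go waits above is polling for the pod's PodReady condition to flip to True. A minimal client-go helper expressing that check (a simplified reading of the wait, not minikube's exact code; clientset construction as in the earlier rest.Config sketch):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's PodReady condition is "True",
    // the status each wait above is polling for.
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {} // wiring omitted; see the rest.Config sketch above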
I0921 15:27:28.720263 10408 api_server.go:51] waiting for apiserver process to appear ...
I0921 15:27:28.720316 10408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0921 15:27:28.729887 10408 api_server.go:71] duration metric: took 2.78564504s to wait for apiserver process to appear ...
I0921 15:27:28.729899 10408 api_server.go:87] waiting for apiserver healthz status ...
I0921 15:27:28.729905 10408 api_server.go:240] Checking apiserver healthz at https://192.168.64.28:8443/healthz ...
I0921 15:27:28.733744 10408 api_server.go:266] https://192.168.64.28:8443/healthz returned 200:
ok
I0921 15:27:28.734313 10408 api_server.go:140] control plane version: v1.25.2
I0921 15:27:28.734323 10408 api_server.go:130] duration metric: took 4.419338ms to wait for apiserver health ...
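The healthz wait above is a plain HTTPS GET that succeeds once /healthz returns 200 with body "ok". A small polling sketch (certificate verification is skipped here purely for brevity; a faithful client would trust the cluster CA as in the rest.Config sketch):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Probe-only shortcut; trust the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.64.28:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }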
I0921 15:27:28.734328 10408 system_pods.go:43] waiting for kube-system pods to appear ...
I0921 15:27:28.920241 10408 system_pods.go:59] 7 kube-system pods found
I0921 15:27:28.920257 10408 system_pods.go:61] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:28.920261 10408 system_pods.go:61] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running
I0921 15:27:28.920274 10408 system_pods.go:61] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running
I0921 15:27:28.920279 10408 system_pods.go:61] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running
I0921 15:27:28.920283 10408 system_pods.go:61] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:28.920286 10408 system_pods.go:61] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running
I0921 15:27:28.920289 10408 system_pods.go:61] "storage-provisioner" [f71f00f0-f421-45c2-bfe4-c1e99f11b8e5] Running
I0921 15:27:28.920294 10408 system_pods.go:74] duration metric: took 185.961163ms to wait for pod list to return data ...
I0921 15:27:28.920300 10408 default_sa.go:34] waiting for default service account to be created ...
I0921 15:27:29.119704 10408 default_sa.go:45] found service account: "default"
I0921 15:27:29.119720 10408 default_sa.go:55] duration metric: took 199.41576ms for default service account to be created ...
I0921 15:27:29.119727 10408 system_pods.go:116] waiting for k8s-apps to be running ...
I0921 15:27:29.322362 10408 system_pods.go:86] 7 kube-system pods found
I0921 15:27:29.322375 10408 system_pods.go:89] "coredns-565d847f94-9wtnp" [eb8f3bae-6107-4a2b-ba32-d79405830bf0] Running
I0921 15:27:29.322379 10408 system_pods.go:89] "etcd-pause-20220921152522-3535" [17c2d77b-b921-47a8-9a13-17620d5b88c8] Running
I0921 15:27:29.322383 10408 system_pods.go:89] "kube-apiserver-pause-20220921152522-3535" [0e89e308-e699-430a-9feb-d0b972291f03] Running
I0921 15:27:29.322388 10408 system_pods.go:89] "kube-controller-manager-pause-20220921152522-3535" [1e9f7576-ef69-4d06-b19d-0cf5fb9d0471] Running
I0921 15:27:29.322391 10408 system_pods.go:89] "kube-proxy-5c7jc" [1c5b06ea-f4c2-45b9-a80e-d85983bb3282] Running
I0921 15:27:29.322395 10408 system_pods.go:89] "kube-scheduler-pause-20220921152522-3535" [cb32a64b-32f0-46e6-8f1c-f2a3460c5fbb] Running
I0921 15:27:29.322398 10408 system_pods.go:89] "storage-provisioner" [f71f00f0-f421-45c2-bfe4-c1e99f11b8e5] Running
I0921 15:27:29.322402 10408 system_pods.go:126] duration metric: took 202.671392ms to wait for k8s-apps to be running ...
I0921 15:27:29.322407 10408 system_svc.go:44] waiting for kubelet service to be running ....
I0921 15:27:29.322452 10408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0921 15:27:29.331792 10408 system_svc.go:56] duration metric: took 9.381149ms WaitForService to wait for kubelet.
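The kubelet service check is exactly the command shown: systemctl is-active with --quiet, so only the exit status matters (0 means active). Run on the node itself, it reduces to the following (argument list copied from the log line; the SSH transport is elided):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 <=> the kubelet unit is active; --quiet suppresses all output.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }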
I0921 15:27:29.331804 10408 kubeadm.go:573] duration metric: took 3.387565971s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0921 15:27:29.331823 10408 node_conditions.go:102] verifying NodePressure condition ...
I0921 15:27:29.518084 10408 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0921 15:27:29.518100 10408 node_conditions.go:123] node cpu capacity is 2
I0921 15:27:29.518105 10408 node_conditions.go:105] duration metric: took 186.278888ms to run NodePressure ...
I0921 15:27:29.518113 10408 start.go:216] waiting for startup goroutines ...
I0921 15:27:29.551427 10408 start.go:506] kubectl: 1.25.0, cluster: 1.25.2 (minor skew: 0)
I0921 15:27:29.611327 10408 out.go:177] * Done! kubectl is now configured to use "pause-20220921152522-3535" cluster and "default" namespace by default
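The "minor skew: 0" line compares the kubectl client (1.25.0) and cluster (1.25.2) minor versions; patch differences are ignored. A toy version of that comparison (naive major.minor string splitting, an assumption about the real helper, which presumably uses a semver library):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings.
    func minorSkew(client, cluster string) int {
        cm, _ := strconv.Atoi(strings.Split(client, ".")[1])
        sm, _ := strconv.Atoi(strings.Split(cluster, ".")[1])
        if cm > sm {
            return cm - sm
        }
        return sm - cm
    }

    func main() {
        fmt.Println("minor skew:", minorSkew("1.25.0", "1.25.2")) // prints 0, matching the log
    }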
*
* ==> Docker <==
* -- Journal begins at Wed 2022-09-21 22:25:29 UTC, ends at Wed 2022-09-21 22:27:33 UTC. --
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.405457988Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64651e97bf148aa1e9fbcad6bfbec4d1e8535ad920f0d5c47cd57190f6804445 pid=5990 runtime=io.containerd.runc.v2
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406210133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406245445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406253448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.406435610Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/207eee071672f5cc181475db6e621afacd6722bc026b03a3b344ad50e1cefc78 pid=5992 runtime=io.containerd.runc.v2
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.422862395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.422958571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.422967730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:07 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:07.423253250Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/534b0d7cd88d7c2d979cc7e5c6eb29977494de71ff82fec3d02420ecb80a30b9 pid=6024 runtime=io.containerd.runc.v2
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785293775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785363542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785372748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:15 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:15.785536470Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1650473a18ef5642e63da9873326d2ed8d331ce75d182aaf5834afe35d8f1c48 pid=6217 runtime=io.containerd.runc.v2
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.098886881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.098975354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.098986289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:16 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:16.099142849Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/152338a53f1e4e1033c391833e8d6cba34a8c41caa549b9524e155354c7edd68 pid=6265 runtime=io.containerd.runc.v2
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192601808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192670528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192679056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.192948353Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c41fc7d463dbce833eb22fe2cbe7272c863767af9f5ce4eb37b36c8efa33b012 pid=6532 runtime=io.containerd.runc.v2
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493268572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493331709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493341289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 21 22:27:27 pause-20220921152522-3535 dockerd[3700]: time="2022-09-21T22:27:27.493781950Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e6a3aeef0ff7cec28ea93bae81a53252f4adbfe81f9da2e64add46df53fa77f2 pid=6573 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
e6a3aeef0ff7c 6e38f40d628db 6 seconds ago Running storage-provisioner 0 c41fc7d463dbc
152338a53f1e4 1c7d8c51823b5 17 seconds ago Running kube-proxy 3 f67bd5c5d43e1
1650473a18ef5 5185b96f0becf 18 seconds ago Running coredns 2 92cc25df1c118
64651e97bf148 a8a176a5d5d69 26 seconds ago Running etcd 3 0249ca0da9611
207eee071672f ca0ea1ee3cfd3 26 seconds ago Running kube-scheduler 3 522a493620409
534b0d7cd88d7 dbfceb93c69b6 26 seconds ago Running kube-controller-manager 3 f60c5ce6318fc
b6d4531497f33 97801f8394908 31 seconds ago Running kube-apiserver 3 0ca250926532e
d7cbc4c453b05 ca0ea1ee3cfd3 42 seconds ago Exited kube-scheduler 2 1a3e01fca5715
823942ffecb6f dbfceb93c69b6 45 seconds ago Exited kube-controller-manager 2 e1129956136e0
283fac289f860 a8a176a5d5d69 46 seconds ago Exited etcd 2 eb1318ed7bcc9
c2e8fe8419a96 1c7d8c51823b5 47 seconds ago Exited kube-proxy 2 994dd806c8bfd
4934b6e15931f 5185b96f0becf About a minute ago Exited coredns 1 163c82f50ebf1
3a4741e1fe3c0 97801f8394908 About a minute ago Exited kube-apiserver 2 3d0143698c2dc
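The table interleaves both generations of the control plane: the attempt-2 containers that Exited during the restart and the attempt-3 (plus storage-provisioner attempt-0) containers now Running. A listing in roughly this shape can be reproduced on the node with the Docker CLI; the format string below is an approximation, not minikube's own:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--format", "{{.ID}}\t{{.Image}}\t{{.RunningFor}}\t{{.Status}}\t{{.Names}}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }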
*
* ==> coredns [1650473a18ef] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [4934b6e15931] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/ready: Still waiting on: "kubernetes"
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
*
* ==> describe nodes <==
* Name: pause-20220921152522-3535
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220921152522-3535
kubernetes.io/os=linux
minikube.k8s.io/commit=937c68716dfaac5b5ffa3b6655158d5d3472b8c4
minikube.k8s.io/name=pause-20220921152522-3535
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_09_21T15_25_59_0700
minikube.k8s.io/version=v1.27.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 21 Sep 2022 22:25:58 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220921152522-3535
AcquireTime: <unset>
RenewTime: Wed, 21 Sep 2022 22:27:24 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:25:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:25:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:25:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 21 Sep 2022 22:27:14 +0000 Wed, 21 Sep 2022 22:26:09 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.28
Hostname: pause-20220921152522-3535
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 0962272db386446fb19d5815e48c70e2
System UUID: 485511ed-0000-0000-82c9-149d997fca88
Boot ID: e52786ed-2040-47a8-9190-c9c808b4a98b
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.18
Kubelet Version: v1.25.2
Kube-Proxy Version: v1.25.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-565d847f94-9wtnp 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 83s
kube-system etcd-pause-20220921152522-3535 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 95s
kube-system kube-apiserver-pause-20220921152522-3535 250m (12%) 0 (0%) 0 (0%) 0 (0%) 95s
kube-system kube-controller-manager-pause-20220921152522-3535 200m (10%) 0 (0%) 0 (0%) 0 (0%) 95s
kube-system kube-proxy-5c7jc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 83s
kube-system kube-scheduler-pause-20220921152522-3535 100m (5%) 0 (0%) 0 (0%) 0 (0%) 95s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 81s kube-proxy
Normal Starting 17s kube-proxy
Normal Starting 67s kube-proxy
Normal NodeHasSufficientPID 109s (x5 over 109s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 109s (x6 over 109s) kubelet Node pause-20220921152522-3535 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 109s (x6 over 109s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientMemory
Normal Starting 95s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 95s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 95s kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 95s kubelet Node pause-20220921152522-3535 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 95s kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientPID
Normal NodeReady 85s kubelet Node pause-20220921152522-3535 status is now: NodeReady
Normal RegisteredNode 83s node-controller Node pause-20220921152522-3535 event: Registered Node pause-20220921152522-3535 in Controller
Normal Starting 28s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 28s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 27s (x8 over 28s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 27s (x8 over 28s) kubelet Node pause-20220921152522-3535 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 27s (x7 over 28s) kubelet Node pause-20220921152522-3535 status is now: NodeHasSufficientPID
Normal RegisteredNode 7s node-controller Node pause-20220921152522-3535 event: Registered Node pause-20220921152522-3535 in Controller
*
* ==> dmesg <==
* [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.836758] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.731337] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[ +0.090984] systemd-fstab-generator[541]: Ignoring "noauto" for root device
[ +5.027202] systemd-fstab-generator[762]: Ignoring "noauto" for root device
[ +1.197234] kauditd_printk_skb: 16 callbacks suppressed
[ +0.214769] systemd-fstab-generator[921]: Ignoring "noauto" for root device
[ +0.091300] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.097321] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +1.296604] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
[ +0.087737] systemd-fstab-generator[1104]: Ignoring "noauto" for root device
[ +3.910315] systemd-fstab-generator[1322]: Ignoring "noauto" for root device
[ +0.546371] kauditd_printk_skb: 68 callbacks suppressed
[ +13.692006] systemd-fstab-generator[1995]: Ignoring "noauto" for root device
[Sep21 22:26] kauditd_printk_skb: 8 callbacks suppressed
[ +8.344097] systemd-fstab-generator[2768]: Ignoring "noauto" for root device
[ +0.136976] systemd-fstab-generator[2779]: Ignoring "noauto" for root device
[ +0.134278] systemd-fstab-generator[2790]: Ignoring "noauto" for root device
[ +0.497533] kauditd_printk_skb: 17 callbacks suppressed
[ +7.690771] systemd-fstab-generator[4167]: Ignoring "noauto" for root device
[ +0.127432] systemd-fstab-generator[4182]: Ignoring "noauto" for root device
[ +31.144308] kauditd_printk_skb: 34 callbacks suppressed
[Sep21 22:27] systemd-fstab-generator[5830]: Ignoring "noauto" for root device
*
* ==> etcd [283fac289f86] <==
* {"level":"info","ts":"2022-09-21T22:26:47.976Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-09-21T22:26:47.976Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:26:47.976Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:26:49.366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 is starting a new election at term 3"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became pre-candidate at term 3"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgPreVoteResp from d3378a43e4252963 at term 3"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became candidate at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgVoteResp from d3378a43e4252963 at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became leader at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3378a43e4252963 elected leader d3378a43e4252963 at term 4"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3378a43e4252963","local-member-attributes":"{Name:pause-20220921152522-3535 ClientURLs:[https://192.168.64.28:2379]}","request-path":"/0/members/d3378a43e4252963/attributes","cluster-id":"e703c3abd1a7846","publish-timeout":"7s"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:26:49.367Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:26:49.368Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-09-21T22:26:49.370Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.28:2379"}
{"level":"info","ts":"2022-09-21T22:26:49.375Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-09-21T22:26:49.376Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-09-21T22:27:00.388Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-09-21T22:27:00.388Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220921152522-3535","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.28:2380"],"advertise-client-urls":["https://192.168.64.28:2379"]}
WARNING: 2022/09/21 22:27:00 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/09/21 22:27:00 [core] grpc: addrConn.createTransport failed to connect to {192.168.64.28:2379 192.168.64.28:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.64.28:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-09-21T22:27:00.391Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d3378a43e4252963","current-leader-member-id":"d3378a43e4252963"}
{"level":"info","ts":"2022-09-21T22:27:00.392Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:00.394Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:00.394Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220921152522-3535","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.28:2380"],"advertise-client-urls":["https://192.168.64.28:2379"]}
*
* ==> etcd [64651e97bf14] <==
* {"level":"info","ts":"2022-09-21T22:27:08.280Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d3378a43e4252963","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-09-21T22:27:08.282Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3378a43e4252963","initial-advertise-peer-urls":["https://192.168.64.28:2380"],"listen-peer-urls":["https://192.168.64.28:2380"],"advertise-client-urls":["https://192.168.64.28:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.28:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-09-21T22:27:08.282Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 switched to configuration voters=(15219785489916963171)"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e703c3abd1a7846","local-member-id":"d3378a43e4252963","added-peer-id":"d3378a43e4252963","added-peer-peer-urls":["https://192.168.64.28:2380"]}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e703c3abd1a7846","local-member-id":"d3378a43e4252963","cluster-version":"3.5"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:08.285Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.28:2380"}
{"level":"info","ts":"2022-09-21T22:27:08.283Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 is starting a new election at term 4"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became pre-candidate at term 4"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgPreVoteResp from d3378a43e4252963 at term 4"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became candidate at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 received MsgVoteResp from d3378a43e4252963 at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3378a43e4252963 became leader at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3378a43e4252963 elected leader d3378a43e4252963 at term 5"}
{"level":"info","ts":"2022-09-21T22:27:09.547Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3378a43e4252963","local-member-attributes":"{Name:pause-20220921152522-3535 ClientURLs:[https://192.168.64.28:2379]}","request-path":"/0/members/d3378a43e4252963/attributes","cluster-id":"e703c3abd1a7846","publish-timeout":"7s"}
{"level":"info","ts":"2022-09-21T22:27:09.548Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:27:09.548Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.28:2379"}
{"level":"info","ts":"2022-09-21T22:27:09.549Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-09-21T22:27:09.549Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-09-21T22:27:09.550Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-09-21T22:27:09.550Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
*
* ==> kernel <==
* 22:27:34 up 2 min, 0 users, load average: 0.36, 0.20, 0.08
Linux pause-20220921152522-3535 5.10.57 #1 SMP Sat Sep 10 02:24:46 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [3a4741e1fe3c] <==
* W0921 22:26:42.249889 1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0921 22:26:42.252491 1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0921 22:26:47.844900 1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
E0921 22:26:51.410448 1 run.go:74] "command failed" err="context deadline exceeded"
*
* ==> kube-apiserver [b6d4531497f3] <==
* I0921 22:27:14.062878 1 controller.go:85] Starting OpenAPI controller
I0921 22:27:14.063014 1 controller.go:85] Starting OpenAPI V3 controller
I0921 22:27:14.063120 1 naming_controller.go:291] Starting NamingConditionController
I0921 22:27:14.063157 1 establishing_controller.go:76] Starting EstablishingController
I0921 22:27:14.063169 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0921 22:27:14.063271 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0921 22:27:14.063303 1 crd_finalizer.go:266] Starting CRDFinalizer
I0921 22:27:14.071305 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0921 22:27:14.072396 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0921 22:27:14.156918 1 cache.go:39] Caches are synced for autoregister controller
I0921 22:27:14.157381 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0921 22:27:14.159134 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0921 22:27:14.160295 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0921 22:27:14.162748 1 apf_controller.go:305] Running API Priority and Fairness config worker
I0921 22:27:14.164291 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0921 22:27:14.214291 1 shared_informer.go:262] Caches are synced for node_authorizer
I0921 22:27:14.252859 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0921 22:27:14.849364 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0921 22:27:15.061773 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0921 22:27:15.487959 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0921 22:27:15.496083 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0921 22:27:15.512729 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0921 22:27:15.525104 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0921 22:27:15.528873 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0921 22:27:26.810346 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [534b0d7cd88d] <==
* I0921 22:27:27.091965 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0921 22:27:27.092105 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-20220921152522-3535. Assuming now as a timestamp.
I0921 22:27:27.092144 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0921 22:27:27.092272 1 event.go:294] "Event occurred" object="pause-20220921152522-3535" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220921152522-3535 event: Registered Node pause-20220921152522-3535 in Controller"
I0921 22:27:27.110604 1 shared_informer.go:262] Caches are synced for TTL
I0921 22:27:27.111981 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0921 22:27:27.112202 1 shared_informer.go:262] Caches are synced for HPA
I0921 22:27:27.112592 1 shared_informer.go:262] Caches are synced for TTL after finished
I0921 22:27:27.115223 1 shared_informer.go:262] Caches are synced for namespace
I0921 22:27:27.118788 1 shared_informer.go:262] Caches are synced for job
I0921 22:27:27.122949 1 shared_informer.go:262] Caches are synced for cronjob
I0921 22:27:27.126944 1 shared_informer.go:262] Caches are synced for endpoint
I0921 22:27:27.160485 1 shared_informer.go:262] Caches are synced for expand
I0921 22:27:27.173668 1 shared_informer.go:262] Caches are synced for persistent volume
I0921 22:27:27.175944 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0921 22:27:27.203878 1 shared_informer.go:262] Caches are synced for attach detach
I0921 22:27:27.211345 1 shared_informer.go:262] Caches are synced for PV protection
I0921 22:27:27.216091 1 shared_informer.go:262] Caches are synced for resource quota
I0921 22:27:27.220621 1 shared_informer.go:262] Caches are synced for stateful set
I0921 22:27:27.261055 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0921 22:27:27.269364 1 shared_informer.go:262] Caches are synced for resource quota
I0921 22:27:27.311010 1 shared_informer.go:262] Caches are synced for daemon sets
I0921 22:27:27.654916 1 shared_informer.go:262] Caches are synced for garbage collector
I0921 22:27:27.686746 1 shared_informer.go:262] Caches are synced for garbage collector
I0921 22:27:27.686841 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [823942ffecb6] <==
* I0921 22:26:49.430074 1 serving.go:348] Generated self-signed cert in-memory
I0921 22:26:50.068771 1 controllermanager.go:178] Version: v1.25.2
I0921 22:26:50.068811 1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 22:26:50.069610 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0921 22:26:50.069706 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0921 22:26:50.069775 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0921 22:26:50.070146 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
*
* ==> kube-proxy [152338a53f1e] <==
* I0921 22:27:16.200105 1 node.go:163] Successfully retrieved node IP: 192.168.64.28
I0921 22:27:16.200255 1 server_others.go:138] "Detected node IP" address="192.168.64.28"
I0921 22:27:16.200284 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0921 22:27:16.220796 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0921 22:27:16.220810 1 server_others.go:206] "Using iptables Proxier"
I0921 22:27:16.220829 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0921 22:27:16.221038 1 server.go:661] "Version info" version="v1.25.2"
I0921 22:27:16.221047 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 22:27:16.221421 1 config.go:317] "Starting service config controller"
I0921 22:27:16.221427 1 shared_informer.go:255] Waiting for caches to sync for service config
I0921 22:27:16.221438 1 config.go:226] "Starting endpoint slice config controller"
I0921 22:27:16.221440 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0921 22:27:16.221790 1 config.go:444] "Starting node config controller"
I0921 22:27:16.221831 1 shared_informer.go:255] Waiting for caches to sync for node config
I0921 22:27:16.321553 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0921 22:27:16.321868 1 shared_informer.go:262] Caches are synced for service config
I0921 22:27:16.322427 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [c2e8fe8419a9] <==
* E0921 22:26:52.417919 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.64.28:45762->192.168.64.28:8443: read: connection reset by peer
E0921 22:26:53.525473 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:55.541635 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.072196 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220921152522-3535": dial tcp 192.168.64.28:8443: connect: connection refused
*
* ==> kube-scheduler [207eee071672] <==
* I0921 22:27:07.942128 1 serving.go:348] Generated self-signed cert in-memory
W0921 22:27:14.136528 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0921 22:27:14.136587 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0921 22:27:14.136596 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0921 22:27:14.136622 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0921 22:27:14.160522 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.2"
I0921 22:27:14.160612 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 22:27:14.161435 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0921 22:27:14.161580 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0921 22:27:14.163051 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0921 22:27:14.161599 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0921 22:27:14.263724 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [d7cbc4c453b0] <==
* W0921 22:26:56.662066 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.64.28:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.662326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.28:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:56.676873 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.677417 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:56.727262 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.64.28:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.727389 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.28:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:56.792874 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.28:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:56.792933 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.28:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:57.019135 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:57.019287 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:57.111170 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.64.28:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:57.111256 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.64.28:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:59.563534 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.64.28:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:59.563559 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.28:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:26:59.965353 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:26:59.965379 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.28:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:27:00.044825 1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.64.28:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.044871 1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.28:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
W0921 22:27:00.384285 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.384326 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.28:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.28:8443: connect: connection refused
E0921 22:27:00.398528 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0921 22:27:00.398546 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0921 22:27:00.398572 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0921 22:27:00.398622 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0921 22:27:00.398861 1 run.go:74] "command failed" err="finished without leader elect"
*
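Every kube-scheduler line above fails the same way: list/watch requests to https://192.168.64.28:8443 are refused while the apiserver restarts, so the informer caches never sync, the leader-election lease cannot be renewed, and the scheduler exits with "finished without leader elect". A minimal probe like the sketch below, assuming only the address and rough timing taken from this log (everything else is illustrative, not minikube code), shows when the port starts accepting TLS connections again:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func main() {
    // The test cluster serves with a self-signed CA, so this throwaway
    // diagnostic skips verification; never do this against a real cluster.
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(90 * time.Second)
    for time.Now().Before(deadline) {
        resp, err := client.Get("https://192.168.64.28:8443/healthz")
        if err == nil {
            resp.Body.Close()
            fmt.Println("apiserver reachable:", resp.Status)
            return
        }
        fmt.Println("still down:", err)
        time.Sleep(2 * time.Second)
    }
    fmt.Println("timed out waiting for the apiserver")
}

Run while the second start is in progress, the output flips from "still down: ... connection refused" to "apiserver reachable" at roughly the moment the components above stop logging these errors.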
* ==> kubelet <==
* -- Journal begins at Wed 2022-09-21 22:25:29 UTC, ends at Wed 2022-09-21 22:27:35 UTC. --
Sep 21 22:27:13 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:13.739144 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:13 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:13.839713 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:13 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:13.940319 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: E0921 22:27:14.040786 5836 kubelet.go:2448] "Error getting node" err="node \"pause-20220921152522-3535\" not found"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.141509 5836 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.142001 5836 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.235105 5836 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220921152522-3535"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.235257 5836 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220921152522-3535"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.845723 5836 apiserver.go:52] "Watching apiserver"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.847588 5836 topology_manager.go:205] "Topology Admit Handler"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.847682 5836 topology_manager.go:205] "Topology Admit Handler"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951602 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-kube-proxy\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951731 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-lib-modules\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951776 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-xtables-lock\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951850 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb8f3bae-6107-4a2b-ba32-d79405830bf0-config-volume\") pod \"coredns-565d847f94-9wtnp\" (UID: \"eb8f3bae-6107-4a2b-ba32-d79405830bf0\") " pod="kube-system/coredns-565d847f94-9wtnp"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951882 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2kwd\" (UniqueName: \"kubernetes.io/projected/eb8f3bae-6107-4a2b-ba32-d79405830bf0-kube-api-access-p2kwd\") pod \"coredns-565d847f94-9wtnp\" (UID: \"eb8f3bae-6107-4a2b-ba32-d79405830bf0\") " pod="kube-system/coredns-565d847f94-9wtnp"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951915 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh2rf\" (UniqueName: \"kubernetes.io/projected/1c5b06ea-f4c2-45b9-a80e-d85983bb3282-kube-api-access-zh2rf\") pod \"kube-proxy-5c7jc\" (UID: \"1c5b06ea-f4c2-45b9-a80e-d85983bb3282\") " pod="kube-system/kube-proxy-5c7jc"
Sep 21 22:27:14 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:14.951971 5836 reconciler.go:169] "Reconciler: start to sync state"
Sep 21 22:27:15 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:15.748097 5836 scope.go:115] "RemoveContainer" containerID="4934b6e15931f96c8cd7409c9d9d107463001d3dbbe402bc7ecacd045cfdf26e"
Sep 21 22:27:16 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:16.049291 5836 scope.go:115] "RemoveContainer" containerID="c2e8fe8419a96380dd14dec68931ed3399dbf26a6ff33aace75ae52a339d8568"
Sep 21 22:27:23 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:23.685529 5836 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Sep 21 22:27:26 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:26.821517 5836 topology_manager.go:205] "Topology Admit Handler"
Sep 21 22:27:26 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:26.979546 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f71f00f0-f421-45c2-bfe4-c1e99f11b8e5-tmp\") pod \"storage-provisioner\" (UID: \"f71f00f0-f421-45c2-bfe4-c1e99f11b8e5\") " pod="kube-system/storage-provisioner"
Sep 21 22:27:26 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:26.979717 5836 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2k8\" (UniqueName: \"kubernetes.io/projected/f71f00f0-f421-45c2-bfe4-c1e99f11b8e5-kube-api-access-tv2k8\") pod \"storage-provisioner\" (UID: \"f71f00f0-f421-45c2-bfe4-c1e99f11b8e5\") " pod="kube-system/storage-provisioner"
Sep 21 22:27:27 pause-20220921152522-3535 kubelet[5836]: I0921 22:27:27.456744 5836 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="c41fc7d463dbce833eb22fe2cbe7272c863767af9f5ce4eb37b36c8efa33b012"
*
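The kubelet journal tells the recovery half of the story: repeated "node not found" while the Node object is still absent, then re-registration, a pod CIDR update to 10.244.0.0/24, and volume re-attachment for kube-proxy and coredns. To collect this journal outside the test harness, an exec wrapper along these lines should work (the profile name and binary path are the ones from this run; the wrapper itself is a sketch, not a helper from helpers_test.go):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // "minikube ssh" runs a command inside the VM; journalctl -u kubelet
    // produces exactly the "-- Journal begins at ... --" output shown above.
    out, err := exec.Command("out/minikube-darwin-amd64",
        "-p", "pause-20220921152522-3535",
        "ssh", "--", "sudo", "journalctl", "-u", "kubelet", "--no-pager").CombinedOutput()
    if err != nil {
        fmt.Println("ssh failed:", err)
    }
    fmt.Println(string(out))
}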
* ==> storage-provisioner [e6a3aeef0ff7] <==
* I0921 22:27:27.575776 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0921 22:27:27.585007 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0921 22:27:27.585247 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0921 22:27:27.589937 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0921 22:27:27.590215 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220921152522-3535_c99c674d-e74f-4876-b9bc-cca2318207c1!
I0921 22:27:27.591354 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cea77369-71af-4aec-8a4d-59cc48396b09", APIVersion:"v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220921152522-3535_c99c674d-e74f-4876-b9bc-cca2318207c1 became leader
I0921 22:27:27.690985 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220921152522-3535_c99c674d-e74f-4876-b9bc-cca2318207c1!
-- /stdout --
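The storage-provisioner section is the one component that came back cleanly: it acquired the kube-system/k8s.io-minikube-hostpath lease, emitted a LeaderElection event, and started its controller. With endpoints-based leader election the holder is recorded in an annotation on that Endpoints object; the key used below, control-plane.alpha.kubernetes.io/leader, is the conventional one and is an assumption about this provisioner build, as is the rest of the sketch:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Read the leader-election record off the Endpoints object named in the
    // provisioner log. The jsonpath escapes the dots inside the annotation key.
    out, err := exec.Command("kubectl",
        "--context", "pause-20220921152522-3535",
        "-n", "kube-system", "get", "endpoints", "k8s.io-minikube-hostpath",
        "-o", `jsonpath={.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}`).Output()
    if err != nil {
        fmt.Println("kubectl failed:", err)
        return
    }
    fmt.Println("leader record:", string(out))
}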
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220921152522-3535 -n pause-20220921152522-3535
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220921152522-3535 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220921152522-3535 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220921152522-3535 describe pod : exit status 1 (37.073474ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220921152522-3535 describe pod : exit status 1
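Two distinct problems are visible here. The test failure itself is the assertion at pause_test.go:100: the second start logged "Updating the running hyperkit ... VM" instead of "The running cluster does not require reconfiguration", meaning minikube decided the running cluster needed reconfiguring. The describe-pod exit 1 just above is only post-mortem noise: no non-running pods were found, so the helper invoked "kubectl describe pod" with no resource name, which kubectl rejects. A guard along these lines (hypothetical names, not the actual helpers_test.go code) would avoid that:

package helpers

import (
    "os/exec"
    "strings"
    "testing"
)

// describeNonRunningPods sketches the post-mortem step: it only calls
// "kubectl describe pod" when there is at least one pod name to pass,
// since kubectl rejects an empty resource name with exit status 1.
func describeNonRunningPods(t *testing.T, profile string, podNames []string) {
    if len(podNames) == 0 {
        t.Logf("no non-running pods to describe")
        return
    }
    args := append([]string{"--context", profile, "describe", "pod"}, podNames...)
    out, err := exec.Command("kubectl", args...).CombinedOutput()
    if err != nil {
        t.Logf("kubectl describe pod %s: %v", strings.Join(podNames, " "), err)
    }
    t.Logf("%s", out)
}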
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (79.73s)