=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-darwin-amd64 start -p pause-030526 --alsologtostderr -v=1 --driver=hyperkit
E0114 03:06:43.518124 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/ingress-addon-legacy-021426/client.crt: no such file or directory
E0114 03:07:04.703933 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.709189 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.719626 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.739791 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.781007 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:04.861161 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:05.021767 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:05.342918 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:05.984512 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:07.265099 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
E0114 03:07:09.825620 2917 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/auto-025507/client.crt: no such file or directory
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-030526 --alsologtostderr -v=1 --driver=hyperkit : (58.07772579s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-030526] minikube v1.28.0 on Darwin 13.0.1
- MINIKUBE_LOCATION=15642
- KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
* Using the hyperkit driver based on existing profile
* Starting control plane node pause-030526 in cluster pause-030526
* Updating the running hyperkit "pause-030526" VM ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0114 03:06:28.687385 9157 out.go:296] Setting OutFile to fd 1 ...
I0114 03:06:28.687591 9157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 03:06:28.687599 9157 out.go:309] Setting ErrFile to fd 2...
I0114 03:06:28.687603 9157 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 03:06:28.687745 9157 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
I0114 03:06:28.688255 9157 out.go:303] Setting JSON to false
I0114 03:06:28.710759 9157 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":3961,"bootTime":1673690427,"procs":419,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0114 03:06:28.710861 9157 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0114 03:06:28.734596 9157 out.go:177] * [pause-030526] minikube v1.28.0 on Darwin 13.0.1
I0114 03:06:28.776842 9157 notify.go:220] Checking for updates...
I0114 03:06:28.798601 9157 out.go:177] - MINIKUBE_LOCATION=15642
I0114 03:06:28.840798 9157 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
I0114 03:06:28.861619 9157 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0114 03:06:28.882772 9157 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 03:06:28.924484 9157 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
I0114 03:06:28.946117 9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:06:28.946536 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:06:28.946565 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:06:28.954173 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52749
I0114 03:06:28.954556 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:06:28.955008 9157 main.go:134] libmachine: Using API Version 1
I0114 03:06:28.955021 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:06:28.955307 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:06:28.955441 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:28.955618 9157 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 03:06:28.955912 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:06:28.955936 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:06:28.963290 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52751
I0114 03:06:28.963705 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:06:28.964099 9157 main.go:134] libmachine: Using API Version 1
I0114 03:06:28.964121 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:06:28.964335 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:06:28.964443 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:28.992792 9157 out.go:177] * Using the hyperkit driver based on existing profile
I0114 03:06:29.034697 9157 start.go:294] selected driver: hyperkit
I0114 03:06:29.034712 9157 start.go:838] validating driver "hyperkit" against &{Name:pause-030526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 03:06:29.034854 9157 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 03:06:29.034912 9157 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 03:06:29.035024 9157 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0114 03:06:29.042466 9157 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0114 03:06:29.046033 9157 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:06:29.046055 9157 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0114 03:06:29.049158 9157 cni.go:95] Creating CNI manager for ""
I0114 03:06:29.049177 9157 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 03:06:29.049194 9157 start_flags.go:319] config:
{Name:pause-030526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 03:06:29.049375 9157 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 03:06:29.091586 9157 out.go:177] * Starting control plane node pause-030526 in cluster pause-030526
I0114 03:06:29.112827 9157 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 03:06:29.112917 9157 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0114 03:06:29.112936 9157 cache.go:57] Caching tarball of preloaded images
I0114 03:06:29.113063 9157 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0114 03:06:29.113081 9157 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0114 03:06:29.113170 9157 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/config.json ...
I0114 03:06:29.113636 9157 cache.go:193] Successfully downloaded all kic artifacts
I0114 03:06:29.113667 9157 start.go:364] acquiring machines lock for pause-030526: {Name:mkd798b4eb4b12534fdc8a3119639005936a788a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0114 03:06:29.113733 9157 start.go:368] acquired machines lock for "pause-030526" in 45.637µs
I0114 03:06:29.113755 9157 start.go:96] Skipping create...Using existing machine configuration
I0114 03:06:29.113766 9157 fix.go:55] fixHost starting:
I0114 03:06:29.114009 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:06:29.114025 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:06:29.121486 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52753
I0114 03:06:29.121864 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:06:29.122354 9157 main.go:134] libmachine: Using API Version 1
I0114 03:06:29.122369 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:06:29.122611 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:06:29.122713 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.122814 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:06:29.122941 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:06:29.123087 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:06:29.124181 9157 fix.go:103] recreateIfNeeded on pause-030526: state=Running err=<nil>
W0114 03:06:29.124197 9157 fix.go:129] unexpected machine state, will restart: <nil>
I0114 03:06:29.166509 9157 out.go:177] * Updating the running hyperkit "pause-030526" VM ...
I0114 03:06:29.187761 9157 machine.go:88] provisioning docker machine ...
I0114 03:06:29.187784 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.187932 9157 main.go:134] libmachine: (pause-030526) Calling .GetMachineName
I0114 03:06:29.188023 9157 buildroot.go:166] provisioning hostname "pause-030526"
I0114 03:06:29.188033 9157 main.go:134] libmachine: (pause-030526) Calling .GetMachineName
I0114 03:06:29.188122 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.188210 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.188309 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.188405 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.188490 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.188626 9157 main.go:134] libmachine: Using SSH client type: native
I0114 03:06:29.188805 9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.24 22 <nil> <nil>}
I0114 03:06:29.188818 9157 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-030526 && echo "pause-030526" | sudo tee /etc/hostname
I0114 03:06:29.273398 9157 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-030526
I0114 03:06:29.273418 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.273565 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.273662 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.273742 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.273835 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.273992 9157 main.go:134] libmachine: Using SSH client type: native
I0114 03:06:29.274116 9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.24 22 <nil> <nil>}
I0114 03:06:29.274129 9157 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-030526' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-030526/g' /etc/hosts;
else
echo '127.0.1.1 pause-030526' | sudo tee -a /etc/hosts;
fi
fi
I0114 03:06:29.348965 9157 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0114 03:06:29.348986 9157 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15642-1627/.minikube CaCertPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15642-1627/.minikube}
I0114 03:06:29.349020 9157 buildroot.go:174] setting up certificates
I0114 03:06:29.349033 9157 provision.go:83] configureAuth start
I0114 03:06:29.349046 9157 main.go:134] libmachine: (pause-030526) Calling .GetMachineName
I0114 03:06:29.349179 9157 main.go:134] libmachine: (pause-030526) Calling .GetIP
I0114 03:06:29.349277 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.349368 9157 provision.go:138] copyHostCerts
I0114 03:06:29.349460 9157 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1627/.minikube/key.pem, removing ...
I0114 03:06:29.349470 9157 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1627/.minikube/key.pem
I0114 03:06:29.349604 9157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15642-1627/.minikube/key.pem (1679 bytes)
I0114 03:06:29.349818 9157 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.pem, removing ...
I0114 03:06:29.349825 9157 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.pem
I0114 03:06:29.349899 9157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.pem (1082 bytes)
I0114 03:06:29.350084 9157 exec_runner.go:144] found /Users/jenkins/minikube-integration/15642-1627/.minikube/cert.pem, removing ...
I0114 03:06:29.350091 9157 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15642-1627/.minikube/cert.pem
I0114 03:06:29.350154 9157 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15642-1627/.minikube/cert.pem (1123 bytes)
I0114 03:06:29.350278 9157 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca-key.pem org=jenkins.pause-030526 san=[192.168.64.24 192.168.64.24 localhost 127.0.0.1 minikube pause-030526]
I0114 03:06:29.470936 9157 provision.go:172] copyRemoteCerts
I0114 03:06:29.471020 9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0114 03:06:29.471058 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.471229 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.471341 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.471418 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.471504 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:06:29.522818 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0114 03:06:29.539830 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0114 03:06:29.557141 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0114 03:06:29.574682 9157 provision.go:86] duration metric: configureAuth took 225.633354ms
I0114 03:06:29.574695 9157 buildroot.go:189] setting minikube options for container-runtime
I0114 03:06:29.574864 9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:06:29.574903 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.575088 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.575199 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.575309 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.575407 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.575504 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.575647 9157 main.go:134] libmachine: Using SSH client type: native
I0114 03:06:29.575756 9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.24 22 <nil> <nil>}
I0114 03:06:29.575765 9157 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0114 03:06:29.651864 9157 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0114 03:06:29.651886 9157 buildroot.go:70] root file system type: tmpfs
I0114 03:06:29.652052 9157 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0114 03:06:29.652086 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.652231 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.652330 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.652418 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.652523 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.652668 9157 main.go:134] libmachine: Using SSH client type: native
I0114 03:06:29.652795 9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.24 22 <nil> <nil>}
I0114 03:06:29.652844 9157 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0114 03:06:29.737555 9157 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0114 03:06:29.737583 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.737711 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.737816 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.737910 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.737994 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.738154 9157 main.go:134] libmachine: Using SSH client type: native
I0114 03:06:29.738285 9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.24 22 <nil> <nil>}
I0114 03:06:29.738299 9157 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0114 03:06:29.818542 9157 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0114 03:06:29.818555 9157 machine.go:91] provisioned docker machine in 630.785986ms
I0114 03:06:29.818564 9157 start.go:300] post-start starting for "pause-030526" (driver="hyperkit")
I0114 03:06:29.818569 9157 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0114 03:06:29.818585 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.818762 9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0114 03:06:29.818776 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.818863 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.818989 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.819101 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.819212 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:06:29.863589 9157 ssh_runner.go:195] Run: cat /etc/os-release
I0114 03:06:29.867147 9157 info.go:137] Remote host: Buildroot 2021.02.12
I0114 03:06:29.867177 9157 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1627/.minikube/addons for local assets ...
I0114 03:06:29.867293 9157 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15642-1627/.minikube/files for local assets ...
I0114 03:06:29.867473 9157 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem -> 29172.pem in /etc/ssl/certs
I0114 03:06:29.867660 9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0114 03:06:29.875288 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem --> /etc/ssl/certs/29172.pem (1708 bytes)
I0114 03:06:29.898311 9157 start.go:303] post-start completed in 79.737622ms
I0114 03:06:29.898335 9157 fix.go:57] fixHost completed within 784.573643ms
I0114 03:06:29.898350 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.898547 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.898686 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.898829 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.899014 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.899206 9157 main.go:134] libmachine: Using SSH client type: native
I0114 03:06:29.899394 9157 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.24 22 <nil> <nil>}
I0114 03:06:29.899432 9157 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I0114 03:06:29.979697 9157 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673694390.143622494
I0114 03:06:29.979710 9157 fix.go:207] guest clock: 1673694390.143622494
I0114 03:06:29.979738 9157 fix.go:220] Guest: 2023-01-14 03:06:30.143622494 -0800 PST Remote: 2023-01-14 03:06:29.898338 -0800 PST m=+1.283437140 (delta=245.284494ms)
I0114 03:06:29.979808 9157 fix.go:191] guest clock delta is within tolerance: 245.284494ms
I0114 03:06:29.979818 9157 start.go:83] releasing machines lock for "pause-030526", held for 866.080602ms
I0114 03:06:29.979845 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.980003 9157 main.go:134] libmachine: (pause-030526) Calling .GetIP
I0114 03:06:29.980117 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.980494 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.980640 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:06:29.980740 9157 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0114 03:06:29.980793 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.980832 9157 ssh_runner.go:195] Run: cat /version.json
I0114 03:06:29.980844 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:06:29.980928 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.981016 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:06:29.981118 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.981152 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:06:29.981256 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.981307 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:06:29.981396 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:06:29.981473 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:06:30.027399 9157 ssh_runner.go:195] Run: systemctl --version
I0114 03:06:30.066000 9157 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 03:06:30.066156 9157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 03:06:30.089163 9157 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0114 03:06:30.089186 9157 docker.go:543] Images already preloaded, skipping extraction
I0114 03:06:30.089274 9157 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0114 03:06:30.100249 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0114 03:06:30.112389 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0114 03:06:30.124623 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0114 03:06:30.139844 9157 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0114 03:06:30.284861 9157 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0114 03:06:30.434310 9157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 03:06:30.566116 9157 ssh_runner.go:195] Run: sudo systemctl restart docker
I0114 03:06:47.671669 9157 ssh_runner.go:235] Completed: sudo systemctl restart docker: (17.105631991s)
I0114 03:06:47.671737 9157 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0114 03:06:47.770672 9157 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0114 03:06:47.869421 9157 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0114 03:06:47.878146 9157 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0114 03:06:47.878278 9157 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0114 03:06:47.881634 9157 start.go:472] Will wait 60s for crictl version
I0114 03:06:47.882340 9157 ssh_runner.go:195] Run: which crictl
I0114 03:06:47.884593 9157 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0114 03:06:47.908327 9157 start.go:488] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.21
RuntimeApiVersion: 1.41.0
I0114 03:06:47.908411 9157 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0114 03:06:47.928147 9157 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0114 03:06:47.969295 9157 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
I0114 03:06:47.969496 9157 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0114 03:06:47.973825 9157 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 03:06:47.973900 9157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 03:06:47.989568 9157 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0114 03:06:47.989581 9157 docker.go:543] Images already preloaded, skipping extraction
I0114 03:06:47.989674 9157 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0114 03:06:48.005387 9157 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0114 03:06:48.005408 9157 cache_images.go:84] Images are preloaded, skipping loading
I0114 03:06:48.005489 9157 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0114 03:06:48.027097 9157 cni.go:95] Creating CNI manager for ""
I0114 03:06:48.027111 9157 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 03:06:48.027129 9157 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0114 03:06:48.027144 9157 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.24 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-030526 NodeName:pause-030526 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.24 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0114 03:06:48.027235 9157 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.24
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-030526"
kubeletExtraArgs:
node-ip: 192.168.64.24
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.24"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0114 03:06:48.027301 9157 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-030526 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.24 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0114 03:06:48.027366 9157 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0114 03:06:48.033500 9157 binaries.go:44] Found k8s binaries, skipping transfer
I0114 03:06:48.033554 9157 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0114 03:06:48.039264 9157 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I0114 03:06:48.050124 9157 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0114 03:06:48.061033 9157 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2037 bytes)
I0114 03:06:48.071797 9157 ssh_runner.go:195] Run: grep 192.168.64.24 control-plane.minikube.internal$ /etc/hosts
I0114 03:06:48.074157 9157 certs.go:54] Setting up /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526 for IP: 192.168.64.24
I0114 03:06:48.074259 9157 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.key
I0114 03:06:48.074312 9157 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/proxy-client-ca.key
I0114 03:06:48.074399 9157 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key
I0114 03:06:48.074458 9157 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/apiserver.key.098da7d7
I0114 03:06:48.074508 9157 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/proxy-client.key
I0114 03:06:48.074730 9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/2917.pem (1338 bytes)
W0114 03:06:48.074771 9157 certs.go:384] ignoring /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/2917_empty.pem, impossibly tiny 0 bytes
I0114 03:06:48.074783 9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca-key.pem (1675 bytes)
I0114 03:06:48.074816 9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem (1082 bytes)
I0114 03:06:48.074856 9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem (1123 bytes)
I0114 03:06:48.074893 9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/certs/key.pem (1679 bytes)
I0114 03:06:48.074969 9157 certs.go:388] found cert: /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem (1708 bytes)
I0114 03:06:48.075495 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0114 03:06:48.091370 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0114 03:06:48.107324 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0114 03:06:48.125396 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0114 03:06:48.142367 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0114 03:06:48.158556 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0114 03:06:48.174530 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0114 03:06:48.190969 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0114 03:06:48.206877 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0114 03:06:48.222733 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/2917.pem --> /usr/share/ca-certificates/2917.pem (1338 bytes)
I0114 03:06:48.238623 9157 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15642-1627/.minikube/files/etc/ssl/certs/29172.pem --> /usr/share/ca-certificates/29172.pem (1708 bytes)
I0114 03:06:48.254642 9157 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0114 03:06:48.265754 9157 ssh_runner.go:195] Run: openssl version
I0114 03:06:48.269279 9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0114 03:06:48.275922 9157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0114 03:06:48.278902 9157 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:06 /usr/share/ca-certificates/minikubeCA.pem
I0114 03:06:48.278944 9157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0114 03:06:48.282442 9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0114 03:06:48.288235 9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2917.pem && ln -fs /usr/share/ca-certificates/2917.pem /etc/ssl/certs/2917.pem"
I0114 03:06:48.294874 9157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2917.pem
I0114 03:06:48.297795 9157 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:10 /usr/share/ca-certificates/2917.pem
I0114 03:06:48.297840 9157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2917.pem
I0114 03:06:48.301620 9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2917.pem /etc/ssl/certs/51391683.0"
I0114 03:06:48.307666 9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29172.pem && ln -fs /usr/share/ca-certificates/29172.pem /etc/ssl/certs/29172.pem"
I0114 03:06:48.314460 9157 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29172.pem
I0114 03:06:48.317366 9157 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:10 /usr/share/ca-certificates/29172.pem
I0114 03:06:48.317407 9157 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29172.pem
I0114 03:06:48.321013 9157 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29172.pem /etc/ssl/certs/3ec20f2e.0"
I0114 03:06:48.326927 9157 kubeadm.go:396] StartCluster: {Name:pause-030526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-030526 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 03:06:48.327060 9157 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0114 03:06:48.342621 9157 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0114 03:06:48.348703 9157 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0114 03:06:48.348718 9157 kubeadm.go:627] restartCluster start
I0114 03:06:48.348768 9157 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0114 03:06:48.354403 9157 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0114 03:06:48.354852 9157 kubeconfig.go:92] found "pause-030526" server: "https://192.168.64.24:8443"
I0114 03:06:48.355521 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 03:06:48.356029 9157 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0114 03:06:48.361588 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:48.361636 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:48.368927 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:48.569934 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:48.570084 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:48.579739 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:48.769235 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:48.769401 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:48.779223 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:48.969141 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:48.969273 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:48.979305 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:49.170586 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:49.170746 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:49.180961 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:49.369472 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:49.369620 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:49.380092 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:49.569221 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:49.569376 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:49.580057 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:49.769559 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:49.769720 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:49.780160 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:49.969713 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:49.969880 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:49.980690 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:50.169158 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:50.169332 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:50.179890 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:50.368999 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:50.369102 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:50.377659 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:50.569384 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:50.569518 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:50.582430 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:50.770880 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:50.770950 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:50.788272 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:50.969676 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:50.969758 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:50.989047 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:51.169043 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:51.169127 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:51.194165 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:51.369417 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:51.369597 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:51.398452 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:51.398462 9157 api_server.go:165] Checking apiserver status ...
I0114 03:06:51.398521 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0114 03:06:51.420959 9157 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0114 03:06:51.420992 9157 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
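
The burst of identical pgrep probes above is minikube polling, at roughly 200ms intervals, for a live kube-apiserver process; after about three seconds with no pid it records "needs reconfigure" and takes the full restart path below, which is why the second start never prints the "The running cluster does not require reconfiguration" message the test asserts on. A minimal sketch of that poll pattern, assuming a local pgrep and invented timeouts in place of minikube's actual ssh_runner plumbing:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer retries the same probe the log shows until a
    // kube-apiserver pid turns up or the deadline passes. Hypothetical
    // stand-in for api_server.go's status check, which runs over SSH.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                return nil // pid found: the apiserver is running
            }
            time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence above
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        if err := waitForAPIServer(3 * time.Second); err != nil {
            fmt.Println("needs reconfigure: apiserver error:", err)
        }
    }
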
I0114 03:06:51.421002 9157 kubeadm.go:1114] stopping kube-system containers ...
I0114 03:06:51.421125 9157 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0114 03:06:51.477694 9157 docker.go:444] Stopping containers: [a91b8dbf52b2 4ef492042630 1f0472740d8e 8cfdb196b142 ec5b05843edc d1df9d20a995 be1781a847e8 a1988593cada 5d6ae273017b c7561d6051ce 9307465ae584 76689e83a514 848614b4aa6d 8a4cb12efc1e 3e92db3e0bfe 3273458c29fa acf450dad9b0 50be22d755aa 96711f56f8f4 e97d9bd01218 fcb88d3eda1c 92a6ee018993 6e5477b52047 918f5acfd267 e3abad0f6e65 419abd92be6a 36bcd90d5bf6 45b468820501 64c356ff458b 61d5e518bf2c 1fa84458fdde 47fa423a8588]
I0114 03:06:51.477845 9157 ssh_runner.go:195] Run: docker stop a91b8dbf52b2 4ef492042630 1f0472740d8e 8cfdb196b142 ec5b05843edc d1df9d20a995 be1781a847e8 a1988593cada 5d6ae273017b c7561d6051ce 9307465ae584 76689e83a514 848614b4aa6d 8a4cb12efc1e 3e92db3e0bfe 3273458c29fa acf450dad9b0 50be22d755aa 96711f56f8f4 e97d9bd01218 fcb88d3eda1c 92a6ee018993 6e5477b52047 918f5acfd267 e3abad0f6e65 419abd92be6a 36bcd90d5bf6 45b468820501 64c356ff458b 61d5e518bf2c 1fa84458fdde 47fa423a8588
I0114 03:07:02.004256 9157 ssh_runner.go:235] Completed: docker stop a91b8dbf52b2 4ef492042630 1f0472740d8e 8cfdb196b142 ec5b05843edc d1df9d20a995 be1781a847e8 a1988593cada 5d6ae273017b c7561d6051ce 9307465ae584 76689e83a514 848614b4aa6d 8a4cb12efc1e 3e92db3e0bfe 3273458c29fa acf450dad9b0 50be22d755aa 96711f56f8f4 e97d9bd01218 fcb88d3eda1c 92a6ee018993 6e5477b52047 918f5acfd267 e3abad0f6e65 419abd92be6a 36bcd90d5bf6 45b468820501 64c356ff458b 61d5e518bf2c 1fa84458fdde 47fa423a8588: (10.526450982s)
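
The container IDs being stopped all follow the dockershim naming convention k8s_<container>_<pod>_<namespace>_<pod-uid>_<attempt>, which is what the name=k8s_.*_(kube-system)_ filter keys on. A rough equivalent of the two commands above, assuming direct shell access to the node's Docker daemon:

    package main

    import (
        "os/exec"
        "strings"
    )

    func main() {
        // List kube-system pod containers by the k8s_* name convention.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return // nothing to stop
        }
        // Stop them all in one invocation, as the log does.
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            panic(err)
        }
    }
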
I0114 03:07:02.004319 9157 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0114 03:07:02.058192 9157 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0114 03:07:02.066713 9157 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jan 14 11:05 /etc/kubernetes/admin.conf
-rw------- 1 root root 5653 Jan 14 11:05 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Jan 14 11:06 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Jan 14 11:05 /etc/kubernetes/scheduler.conf
I0114 03:07:02.066791 9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0114 03:07:02.072780 9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0114 03:07:02.085110 9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0114 03:07:02.093260 9157 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0114 03:07:02.093319 9157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0114 03:07:02.103465 9157 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0114 03:07:02.111399 9157 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0114 03:07:02.111459 9157 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0114 03:07:02.118872 9157 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0114 03:07:02.124956 9157 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0114 03:07:02.124967 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0114 03:07:02.170978 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0114 03:07:03.258681 9157 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.087691389s)
I0114 03:07:03.258712 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0114 03:07:03.416131 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0114 03:07:03.469041 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
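
These five commands replay individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of re-running a full kubeadm init. A sketch of the same sequence, assuming local execution in place of minikube's SSH runner:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Phase order taken from the log above; paths are the ones minikube uses.
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.25.3:$PATH\" kubeadm init phase " + p +
                " --config /var/tmp/minikube/kubeadm.yaml"
            if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
                log.Fatalf("phase %q failed: %v\n%s", p, err, out)
            }
        }
    }
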
I0114 03:07:03.530234 9157 api_server.go:51] waiting for apiserver process to appear ...
I0114 03:07:03.530304 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:04.044210 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:04.544110 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:05.042833 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:05.055120 9157 api_server.go:71] duration metric: took 1.524893992s to wait for apiserver process to appear ...
I0114 03:07:05.055140 9157 api_server.go:87] waiting for apiserver healthz status ...
I0114 03:07:05.055159 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:07.680217 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0114 03:07:07.680234 9157 api_server.go:102] status: https://192.168.64.24:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0114 03:07:08.181057 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:08.185347 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0114 03:07:08.185807 9157 api_server.go:102] status: https://192.168.64.24:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0114 03:07:08.680885 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:08.687817 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0114 03:07:08.710794 9157 api_server.go:102] status: https://192.168.64.24:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0114 03:07:09.180636 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:09.185298 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
ok
I0114 03:07:09.191228 9157 api_server.go:140] control plane version: v1.25.3
I0114 03:07:09.191238 9157 api_server.go:130] duration metric: took 4.136116498s to wait for apiserver health ...
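
The healthz progression above is typical of a restarting apiserver: 403 while the request is still anonymous, then 500 until the rbac/bootstrap-roles and scheduling poststarthooks finish, then 200. A minimal probe using the profile's client certificate, with shortened stand-in paths for the cert/key/CA files from the rest.Config logged earlier:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Shortened stand-ins for the profile paths in the client config above.
        cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
        if err != nil {
            panic(err)
        }
        ca, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(ca)
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}
        resp, err := client.Get("https://192.168.64.24:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // 403, 500, then 200 "ok" as above
    }
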
I0114 03:07:09.191247 9157 cni.go:95] Creating CNI manager for ""
I0114 03:07:09.191255 9157 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 03:07:09.191280 9157 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 03:07:09.196622 9157 system_pods.go:59] 6 kube-system pods found
I0114 03:07:09.196635 9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:09.196641 9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0114 03:07:09.196646 9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0114 03:07:09.196652 9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0114 03:07:09.196657 9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0114 03:07:09.196662 9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0114 03:07:09.196666 9157 system_pods.go:74] duration metric: took 5.37845ms to wait for pod list to return data ...
I0114 03:07:09.196671 9157 node_conditions.go:102] verifying NodePressure condition ...
I0114 03:07:09.198952 9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0114 03:07:09.198970 9157 node_conditions.go:123] node cpu capacity is 2
I0114 03:07:09.198980 9157 node_conditions.go:105] duration metric: took 2.305086ms to run NodePressure ...
I0114 03:07:09.198991 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0114 03:07:09.323365 9157 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0114 03:07:09.326471 9157 kubeadm.go:778] kubelet initialised
I0114 03:07:09.326482 9157 kubeadm.go:779] duration metric: took 3.103727ms waiting for restarted kubelet to initialise ...
I0114 03:07:09.326491 9157 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:09.330104 9157 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:09.333713 9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:09.333722 9157 pod_ready.go:81] duration metric: took 3.606539ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:09.333728 9157 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:11.340950 9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:13.841765 9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:15.843717 9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:18.344136 9157 pod_ready.go:102] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:19.342684 9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:19.342697 9157 pod_ready.go:81] duration metric: took 10.009021387s waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:19.342705 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:21.351765 9157 pod_ready.go:102] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:23.350513 9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.350547 9157 pod_ready.go:81] duration metric: took 4.007860495s waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.350554 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.353476 9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.353485 9157 pod_ready.go:81] duration metric: took 2.925304ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.353490 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.356134 9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.356142 9157 pod_ready.go:81] duration metric: took 2.647244ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.356148 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.358793 9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.358800 9157 pod_ready.go:81] duration metric: took 2.641458ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.358804 9157 pod_ready.go:38] duration metric: took 14.032386778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
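
Each pod_ready wait above polls the pod roughly every two seconds until its Ready condition turns True. A condensed version of that check using client-go, assuming a hypothetical kubeconfig path and one of the pod names from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True,
    // the same condition pod_ready.go keys its "Ready":"True" lines on.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-030526", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // matches the ~2s cadence of the pod_ready lines
        }
    }
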
I0114 03:07:23.358813 9157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0114 03:07:23.366176 9157 ops.go:34] apiserver oom_adj: -16
I0114 03:07:23.366186 9157 kubeadm.go:631] restartCluster took 35.017662843s
I0114 03:07:23.366207 9157 kubeadm.go:398] StartCluster complete in 35.039471935s
I0114 03:07:23.366217 9157 settings.go:142] acquiring lock: {Name:mk0c64d56bf3ff3479e8fa9f559b4f9cf25d55df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:23.366305 9157 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15642-1627/kubeconfig
I0114 03:07:23.366836 9157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/kubeconfig: {Name:mk9e4b5f5c881bca46b5d9046e1e4e38df78e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:23.367658 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 03:07:23.369507 9157 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-030526" rescaled to 1
I0114 03:07:23.369535 9157 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 03:07:23.369542 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0114 03:07:23.369576 9157 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0114 03:07:23.369692 9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:07:23.390480 9157 out.go:177] * Verifying Kubernetes components...
I0114 03:07:23.390629 9157 addons.go:65] Setting storage-provisioner=true in profile "pause-030526"
I0114 03:07:23.433350 9157 addons.go:227] Setting addon storage-provisioner=true in "pause-030526"
I0114 03:07:23.390632 9157 addons.go:65] Setting default-storageclass=true in profile "pause-030526"
W0114 03:07:23.433358 9157 addons.go:236] addon storage-provisioner should already be in state true
I0114 03:07:23.433392 9157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-030526"
I0114 03:07:23.433406 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 03:07:23.430373 9157 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0114 03:07:23.433421 9157 host.go:66] Checking if "pause-030526" exists ...
I0114 03:07:23.433815 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.433877 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.433873 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.433900 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.442841 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52806
I0114 03:07:23.443203 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52808
I0114 03:07:23.443537 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.443728 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.443899 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.443908 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.444057 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.444066 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.444119 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.444329 9157 node_ready.go:35] waiting up to 6m0s for node "pause-030526" to be "Ready" ...
I0114 03:07:23.444380 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.444587 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.444602 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.444609 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.444705 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.445301 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.447147 9157 node_ready.go:49] node "pause-030526" has status "Ready":"True"
I0114 03:07:23.447164 9157 node_ready.go:38] duration metric: took 2.815218ms waiting for node "pause-030526" to be "Ready" ...
I0114 03:07:23.447169 9157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:23.447225 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 03:07:23.450515 9157 addons.go:227] Setting addon default-storageclass=true in "pause-030526"
W0114 03:07:23.450531 9157 addons.go:236] addon default-storageclass should already be in state true
I0114 03:07:23.450551 9157 host.go:66] Checking if "pause-030526" exists ...
I0114 03:07:23.450887 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.450912 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.453524 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52810
I0114 03:07:23.454275 9157 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.454289 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.454742 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.454758 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.455002 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.455108 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.455188 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.455261 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.456200 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:07:23.459195 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52812
I0114 03:07:23.477120 9157 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0114 03:07:23.477524 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.498347 9157 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0114 03:07:23.498358 9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0114 03:07:23.498372 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:07:23.498499 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:07:23.498595 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:07:23.498695 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.498707 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.498780 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:07:23.498953 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:07:23.499031 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.499602 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.499665 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.508249 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52815
I0114 03:07:23.508606 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.509066 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.509081 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.509378 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.509472 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.509563 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.509636 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.510952 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:07:23.511144 9157 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0114 03:07:23.511152 9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0114 03:07:23.511161 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:07:23.511250 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:07:23.511331 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:07:23.511433 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:07:23.511524 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:07:23.553319 9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0114 03:07:23.563588 9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0114 03:07:23.749533 9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.749544 9157 pod_ready.go:81] duration metric: took 295.256786ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.749553 9157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.149706 9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.149731 9157 pod_ready.go:81] duration metric: took 400.160741ms waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.149737 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.158190 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158207 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158210 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158221 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158392 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158444 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158456 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158458 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158461 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158483 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158469 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158502 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158508 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158527 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158704 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158710 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158718 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158730 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158738 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158735 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158751 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158759 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158908 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.159011 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.159025 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.179920 9157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0114 03:07:24.200426 9157 addons.go:488] enableAddons completed in 830.850832ms
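
Addon enablement is just two steps per addon: the manifest is copied to /etc/kubernetes/addons on the node (the "scp memory" lines above) and applied with the node's pinned kubectl against /var/lib/minikube/kubeconfig. A stripped-down equivalent, assuming the manifests are already on the node:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same two applies the log shows, run with the node's own kubectl.
        for _, m := range []string{"storage-provisioner.yaml", "storageclass.yaml"} {
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.25.3/kubectl", "apply",
                "-f", "/etc/kubernetes/addons/"+m)
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("apply %s: %v\n%s", m, err, out)
            }
        }
    }
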
I0114 03:07:24.550392 9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.550424 9157 pod_ready.go:81] duration metric: took 400.664842ms waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.550431 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.949214 9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.949226 9157 pod_ready.go:81] duration metric: took 398.790966ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.949237 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.350138 9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:25.350151 9157 pod_ready.go:81] duration metric: took 400.910872ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.350162 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.749166 9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:25.749177 9157 pod_ready.go:81] duration metric: took 399.012421ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.749184 9157 pod_ready.go:38] duration metric: took 2.302012184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:25.749196 9157 api_server.go:51] waiting for apiserver process to appear ...
I0114 03:07:25.749260 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:25.765950 9157 api_server.go:71] duration metric: took 2.396412835s to wait for apiserver process to appear ...
I0114 03:07:25.765970 9157 api_server.go:87] waiting for apiserver healthz status ...
I0114 03:07:25.765977 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:25.772427 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
ok
I0114 03:07:25.772956 9157 api_server.go:140] control plane version: v1.25.3
I0114 03:07:25.772967 9157 api_server.go:130] duration metric: took 6.991805ms to wait for apiserver health ...
I0114 03:07:25.772974 9157 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 03:07:25.950643 9157 system_pods.go:59] 7 kube-system pods found
I0114 03:07:25.950657 9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:25.950661 9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
I0114 03:07:25.950665 9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
I0114 03:07:25.950678 9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
I0114 03:07:25.950683 9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
I0114 03:07:25.950690 9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
I0114 03:07:25.950696 9157 system_pods.go:61] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
I0114 03:07:25.950700 9157 system_pods.go:74] duration metric: took 177.722556ms to wait for pod list to return data ...
I0114 03:07:25.950706 9157 default_sa.go:34] waiting for default service account to be created ...
I0114 03:07:26.149504 9157 default_sa.go:45] found service account: "default"
I0114 03:07:26.149520 9157 default_sa.go:55] duration metric: took 198.806394ms for default service account to be created ...
I0114 03:07:26.149525 9157 system_pods.go:116] waiting for k8s-apps to be running ...
I0114 03:07:26.350967 9157 system_pods.go:86] 7 kube-system pods found
I0114 03:07:26.350980 9157 system_pods.go:89] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:26.350985 9157 system_pods.go:89] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
I0114 03:07:26.350988 9157 system_pods.go:89] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
I0114 03:07:26.350992 9157 system_pods.go:89] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
I0114 03:07:26.350999 9157 system_pods.go:89] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
I0114 03:07:26.351005 9157 system_pods.go:89] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
I0114 03:07:26.351011 9157 system_pods.go:89] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
I0114 03:07:26.351017 9157 system_pods.go:126] duration metric: took 201.48912ms to wait for k8s-apps to be running ...
I0114 03:07:26.351034 9157 system_svc.go:44] waiting for kubelet service to be running ....
I0114 03:07:26.351110 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 03:07:26.360848 9157 system_svc.go:56] duration metric: took 9.811651ms WaitForService to wait for kubelet.
I0114 03:07:26.360864 9157 kubeadm.go:573] duration metric: took 2.991330205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0114 03:07:26.360876 9157 node_conditions.go:102] verifying NodePressure condition ...
I0114 03:07:26.549739 9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0114 03:07:26.549755 9157 node_conditions.go:123] node cpu capacity is 2
I0114 03:07:26.549762 9157 node_conditions.go:105] duration metric: took 188.883983ms to run NodePressure ...
I0114 03:07:26.549769 9157 start.go:217] waiting for startup goroutines ...
I0114 03:07:26.550105 9157 ssh_runner.go:195] Run: rm -f paused
I0114 03:07:26.590700 9157 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
I0114 03:07:26.635229 9157 out.go:177] * Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-030526 -n pause-030526
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-030526 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-030526 logs -n 25: (2.896771812s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
| stop | -p scheduled-stop-025204 | scheduled-stop-025204 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | |
| | --schedule 15s | | | | | |
| stop | -p scheduled-stop-025204 | scheduled-stop-025204 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
| | --schedule 15s | | | | | |
| delete | -p scheduled-stop-025204 | scheduled-stop-025204 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
| start | -p skaffold-025353 | skaffold-025353 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:54 PST |
| | --memory=2600 | | | | | |
| | --driver=hyperkit | | | | | |
| docker-env | --shell none -p | skaffold-025353 | skaffold | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:54 PST |
| | skaffold-025353 | | | | | |
| | --user=skaffold | | | | | |
| delete | -p skaffold-025353 | skaffold-025353 | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 02:55 PST |
| start | -p offline-docker-025507 | offline-docker-025507 | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
| | --alsologtostderr -v=1 | | | | | |
| | --memory=2048 --wait=true | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p auto-025507 --memory=2048 | auto-025507 | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p auto-025507 pgrep -a | auto-025507 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
| | kubelet | | | | | |
| delete | -p offline-docker-025507 | offline-docker-025507 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:03 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p auto-025507 | auto-025507 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
| stop | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:03 PST |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:04 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p stopped-upgrade-030226 | stopped-upgrade-030226 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:05 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
| delete | -p stopped-upgrade-030226 | stopped-upgrade-030226 | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
| start | -p pause-030526 --memory=2048 | pause-030526 | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:06 PST |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p running-upgrade-030435 | running-upgrade-030435 | jenkins | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-030526 | pause-030526 | jenkins | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p running-upgrade-030435 | running-upgrade-030435 | jenkins | v1.28.0 | 14 Jan 23 03:07 PST | 14 Jan 23 03:07 PST |
| start | -p NoKubernetes-030718 | NoKubernetes-030718 | jenkins | v1.28.0 | 14 Jan 23 03:07 PST | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p NoKubernetes-030718 | NoKubernetes-030718 | jenkins | v1.28.0 | 14 Jan 23 03:07 PST | |
| | --driver=hyperkit | | | | | |
|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/14 03:07:19
Running on machine: MacOS-Agent-1
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0114 03:07:19.019830 9247 out.go:296] Setting OutFile to fd 1 ...
I0114 03:07:19.020100 9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 03:07:19.020104 9247 out.go:309] Setting ErrFile to fd 2...
I0114 03:07:19.020107 9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 03:07:19.020220 9247 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
I0114 03:07:19.020722 9247 out.go:303] Setting JSON to false
I0114 03:07:19.039498 9247 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4012,"bootTime":1673690427,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0114 03:07:19.039605 9247 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0114 03:07:19.077571 9247 out.go:177] * [NoKubernetes-030718] minikube v1.28.0 on Darwin 13.0.1
I0114 03:07:19.136784 9247 notify.go:220] Checking for updates...
I0114 03:07:19.174094 9247 out.go:177] - MINIKUBE_LOCATION=15642
I0114 03:07:19.232648 9247 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
I0114 03:07:19.306906 9247 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0114 03:07:19.328006 9247 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 03:07:19.348934 9247 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
I0114 03:07:19.370518 9247 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:07:19.370581 9247 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 03:07:19.398770 9247 out.go:177] * Using the hyperkit driver based on user configuration
I0114 03:07:19.441028 9247 start.go:294] selected driver: hyperkit
I0114 03:07:19.441047 9247 start.go:838] validating driver "hyperkit" against <nil>
I0114 03:07:19.441077 9247 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 03:07:19.441203 9247 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 03:07:19.441421 9247 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0114 03:07:19.449241 9247 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0114 03:07:19.452425 9247 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:19.452438 9247 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0114 03:07:19.452505 9247 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0114 03:07:19.454697 9247 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
I0114 03:07:19.454829 9247 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
I0114 03:07:19.454850 9247 cni.go:95] Creating CNI manager for ""
I0114 03:07:19.454857 9247 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 03:07:19.454867 9247 start_flags.go:319] config:
{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 03:07:19.454983 9247 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 03:07:19.497050 9247 out.go:177] * Starting control plane node NoKubernetes-030718 in cluster NoKubernetes-030718
I0114 03:07:19.518880 9247 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 03:07:19.518967 9247 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0114 03:07:19.518991 9247 cache.go:57] Caching tarball of preloaded images
I0114 03:07:19.519216 9247 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0114 03:07:19.519232 9247 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
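The preload check above derives a tarball name from the Kubernetes version and container runtime, then skips the download when the file is already cached. A rough sketch of that decision, with the cache layout and the "v18" preload prefix taken from the paths in the log; the function name is illustrative.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected preload tarball location under the
// minikube home directory, following the layout shown in the log above.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
		k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.25.3", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
```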
I0114 03:07:19.519384 9247 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json ...
I0114 03:07:19.519444 9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json: {Name:mk5caec35ff8fcf3d9c5465ac05bd2e53369341a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:19.519982 9247 cache.go:193] Successfully downloaded all kic artifacts
I0114 03:07:19.520014 9247 start.go:364] acquiring machines lock for NoKubernetes-030718: {Name:mkd798b4eb4b12534fdc8a3119639005936a788a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0114 03:07:19.520101 9247 start.go:368] acquired machines lock for "NoKubernetes-030718" in 77µs
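The machines lock above serializes concurrent VM provisioning; note the 500ms retry delay and 13m timeout in the lock spec. One way such a lock can be built from O_CREATE|O_EXCL with retry, as an illustrative sketch only; minikube's real lock implementation differs.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock creates the lock file exclusively, retrying every delay
// until timeout. The returned func releases the lock.
func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for lock %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/minikube-machines.lock",
		500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer release()
	fmt.Println("lock held; safe to provision machine")
}
```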
I0114 03:07:19.520133 9247 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 03:07:19.520191 9247 start.go:125] createHost starting for "" (driver="hyperkit")
I0114 03:07:19.342684 9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:19.342697 9157 pod_ready.go:81] duration metric: took 10.009021387s waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:19.342705 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:21.351765 9157 pod_ready.go:102] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:23.350513 9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.350547 9157 pod_ready.go:81] duration metric: took 4.007860495s waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.350554 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.353476 9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.353485 9157 pod_ready.go:81] duration metric: took 2.925304ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.353490 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.356134 9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.356142 9157 pod_ready.go:81] duration metric: took 2.647244ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.356148 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.358793 9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.358800 9157 pod_ready.go:81] duration metric: took 2.641458ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.358804 9157 pod_ready.go:38] duration metric: took 14.032386778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
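The pod_ready loop above polls each system pod until its PodReady condition reports True, up to the 4m0s per-pod timeout. A minimal client-go sketch of the same wait; the pod name, namespace, and timeout mirror the log, while the polling interval and the rest of the scaffolding are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.Background(), "etcd-pause-030526", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
	os.Exit(1)
}
```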
I0114 03:07:23.358813 9157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0114 03:07:23.366176 9157 ops.go:34] apiserver oom_adj: -16
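The oom_adj probe above runs a one-liner on the guest: resolve the apiserver pid with pgrep, then read its OOM score adjustment. A local sketch of the same probe; the remote ssh_runner plumbing is omitted.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same shell one-liner the log runs inside the VM.
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}
```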
I0114 03:07:23.366186 9157 kubeadm.go:631] restartCluster took 35.017662843s
I0114 03:07:23.366207 9157 kubeadm.go:398] StartCluster complete in 35.039471935s
I0114 03:07:23.366217 9157 settings.go:142] acquiring lock: {Name:mk0c64d56bf3ff3479e8fa9f559b4f9cf25d55df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:23.366305 9157 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15642-1627/kubeconfig
I0114 03:07:23.366836 9157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/kubeconfig: {Name:mk9e4b5f5c881bca46b5d9046e1e4e38df78e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:23.367658 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
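The rest.Config dumped above boils down to a host plus a client cert/key pair and a CA file. A sketch of constructing an equivalent client by hand with client-go; the paths and host mirror the log, and the version call at the end matches the "control plane version" check later on, but the code itself is illustrative.

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	home := os.Getenv("HOME")
	cfg := &rest.Config{
		Host: "https://192.168.64.24:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: home + "/.minikube/profiles/pause-030526/client.crt",
			KeyFile:  home + "/.minikube/profiles/pause-030526/client.key",
			CAFile:   home + "/.minikube/ca.crt",
		},
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to the later "control plane version: v1.25.3" check.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```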
I0114 03:07:23.369507 9157 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-030526" rescaled to 1
I0114 03:07:23.369535 9157 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 03:07:23.369542 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0114 03:07:23.369576 9157 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0114 03:07:23.369692 9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:07:23.390480 9157 out.go:177] * Verifying Kubernetes components...
I0114 03:07:23.390629 9157 addons.go:65] Setting storage-provisioner=true in profile "pause-030526"
I0114 03:07:23.433350 9157 addons.go:227] Setting addon storage-provisioner=true in "pause-030526"
I0114 03:07:23.390632 9157 addons.go:65] Setting default-storageclass=true in profile "pause-030526"
W0114 03:07:23.433358 9157 addons.go:236] addon storage-provisioner should already be in state true
I0114 03:07:23.433392 9157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-030526"
I0114 03:07:23.433406 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 03:07:23.430373 9157 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0114 03:07:23.433421 9157 host.go:66] Checking if "pause-030526" exists ...
I0114 03:07:23.433815 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.433877 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.433873 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.433900 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.442841 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52806
I0114 03:07:23.443203 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52808
I0114 03:07:23.443537 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.443728 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.443899 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.443908 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.444057 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.444066 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.444119 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.444329 9157 node_ready.go:35] waiting up to 6m0s for node "pause-030526" to be "Ready" ...
I0114 03:07:23.444380 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.444587 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.444602 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.444609 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.444705 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.445301 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.447147 9157 node_ready.go:49] node "pause-030526" has status "Ready":"True"
I0114 03:07:23.447164 9157 node_ready.go:38] duration metric: took 2.815218ms waiting for node "pause-030526" to be "Ready" ...
I0114 03:07:23.447169 9157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:23.447225 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 03:07:23.450515 9157 addons.go:227] Setting addon default-storageclass=true in "pause-030526"
W0114 03:07:23.450531 9157 addons.go:236] addon default-storageclass should already be in state true
I0114 03:07:23.450551 9157 host.go:66] Checking if "pause-030526" exists ...
I0114 03:07:23.450887 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.450912 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.453524 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52810
I0114 03:07:23.454275 9157 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.454289 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.454742 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.454758 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.455002 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.455108 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.455188 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.455261 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.456200 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:07:23.459195 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52812
I0114 03:07:23.477120 9157 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0114 03:07:23.477524 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.498347 9157 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0114 03:07:23.498358 9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0114 03:07:23.498372 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:07:23.498499 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:07:23.498595 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:07:23.498695 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.498707 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.498780 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:07:23.498953 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:07:23.499031 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.499602 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.499665 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.508249 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52815
I0114 03:07:23.508606 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.509066 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.509081 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.509378 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.509472 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.509563 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.509636 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.510952 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:07:23.511144 9157 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0114 03:07:23.511152 9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0114 03:07:23.511161 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:07:23.511250 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:07:23.511331 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:07:23.511433 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:07:23.511524 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:07:23.553319 9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0114 03:07:23.563588 9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
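The two Run lines above apply the addon manifests with the kubectl binary cached inside the VM. A sketch using the ssh CLI; the user, host, and key path mirror the sshutil client lines above, but this is an illustrative stand-in for minikube's ssh_runner, not its implementation.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon runs kubectl apply inside the guest over ssh, matching the
// command shown in the log.
func applyAddon(manifest string) error {
	cmd := exec.Command("ssh",
		"-i", os.Getenv("HOME")+"/.minikube/machines/pause-030526/id_rsa",
		"docker@192.168.64.24",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.25.3/kubectl apply -f "+manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Fprintln(os.Stderr, "apply failed:", err)
			os.Exit(1)
		}
	}
}
```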
I0114 03:07:19.541712 9247 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0114 03:07:19.542144 9247 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:19.542218 9247 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:19.550680 9247 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52804
I0114 03:07:19.551062 9247 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:19.551455 9247 main.go:134] libmachine: Using API Version 1
I0114 03:07:19.551463 9247 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:19.551681 9247 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:19.551780 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetMachineName
I0114 03:07:19.551847 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .DriverName
I0114 03:07:19.551975 9247 start.go:159] libmachine.API.Create for "NoKubernetes-030718" (driver="hyperkit")
I0114 03:07:19.552002 9247 client.go:168] LocalClient.Create starting
I0114 03:07:19.552038 9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem
I0114 03:07:19.552082 9247 main.go:134] libmachine: Decoding PEM data...
I0114 03:07:19.552098 9247 main.go:134] libmachine: Parsing certificate...
I0114 03:07:19.552156 9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem
I0114 03:07:19.552191 9247 main.go:134] libmachine: Decoding PEM data...
I0114 03:07:19.552202 9247 main.go:134] libmachine: Parsing certificate...
I0114 03:07:19.552213 9247 main.go:134] libmachine: Running pre-create checks...
I0114 03:07:19.552220 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .PreCreateCheck
I0114 03:07:19.552322 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.552519 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetConfigRaw
I0114 03:07:19.552961 9247 main.go:134] libmachine: Creating machine...
I0114 03:07:19.552966 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .Create
I0114 03:07:19.553050 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.553180 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.553037 9257 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15642-1627/.minikube
I0114 03:07:19.553267 9247 main.go:134] libmachine: (NoKubernetes-030718) Downloading /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15642-1627/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso...
I0114 03:07:19.718817 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.718695 9257 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/id_rsa...
I0114 03:07:19.778524 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.778469 9257 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk...
I0114 03:07:19.778533 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing magic tar header
I0114 03:07:19.778627 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing SSH key tar header
I0114 03:07:19.779210 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.779155 9257 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718 ...
I0114 03:07:19.950626 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.950641 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid
I0114 03:07:19.950650 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Using UUID 9cd1b71a-93fb-11ed-97d5-149d997cd0f1
I0114 03:07:19.972204 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Generated MAC aa:b9:cb:46:9b:fa
I0114 03:07:19.972218 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718
I0114 03:07:19.972248 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0114 03:07:19.972285 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0114 03:07:19.972374 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9cd1b71a-93fb-11ed-97d5-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"}
I0114 03:07:19.972409 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9cd1b71a-93fb-11ed-97d5-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring -f kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"
I0114 03:07:19.972414 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0114 03:07:19.973744 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Pid is 9258
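With the argv above assembled, starting the VM amounts to launching the hyperkit process and recording its pid, which the "Pid is 9258" line reports. An abbreviated sketch under that assumption; the flags shown come from the CmdLine above, the state directory is illustrative, and most device arguments are elided.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	stateDir := "/tmp/NoKubernetes-030718" // illustrative; the log uses the machine dir
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // hyperkit writes its own pid file here
		"-c", "2", // CPUs, as in the CmdLine above
		"-m", "6000M", // memory, as in the CmdLine above
		// The -s device slots, -U UUID, -l serial console, and -f kexec
		// boot arguments from the CmdLine above are elided in this sketch.
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	if err := cmd.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "hyperkit failed to start:", err)
		os.Exit(1)
	}
	fmt.Println("Pid is", cmd.Process.Pid)
}
```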
I0114 03:07:19.974155 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 0
I0114 03:07:19.974164 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.974238 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:19.975810 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:19.976175 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:19.976189 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:19.976220 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:19.976231 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:19.976241 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:19.976250 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:19.976259 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:19.976268 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:19.976274 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:19.976280 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:19.976296 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:19.976308 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:19.976317 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:19.976325 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:19.976336 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:19.976344 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:19.976352 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:19.976359 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:19.976380 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:19.976389 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:19.976395 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:19.976400 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:19.976423 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:19.976436 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
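The attempt loop above rescans /var/db/dhcpd_leases every two seconds until the freshly generated MAC shows up with a lease. A parser sketch, assuming the usual macOS bootpd lease format of key=value lines inside braced blocks (with ip_address preceding hw_address, as the dumped entries suggest); the real file format may vary across macOS versions.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP returns the ip_address of the lease block whose hw_address
// contains mac, or "" if no such lease exists yet.
func findIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil // ip_address precedes hw_address in each block
		case line == "}":
			ip = "" // reset at the end of each lease block
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := findIP("/var/db/dhcpd_leases", "aa:b9:cb:46:9b:fa")
	if err != nil || ip == "" {
		fmt.Println("no lease yet, would retry:", err)
		return
	}
	fmt.Println("found IP:", ip)
}
```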
I0114 03:07:19.980329 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0114 03:07:19.989638 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0114 03:07:19.990267 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0114 03:07:19.990287 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0114 03:07:19.990297 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0114 03:07:19.990312 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0114 03:07:20.551946 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0114 03:07:20.551964 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0114 03:07:20.657062 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0114 03:07:20.657088 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0114 03:07:20.657094 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0114 03:07:20.657104 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0114 03:07:20.657933 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0114 03:07:20.657941 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0114 03:07:21.977330 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 1
I0114 03:07:21.977341 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:21.977398 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:21.978146 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:21.978288 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:21.978295 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:21.978302 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:21.978329 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:21.978337 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:21.978342 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:21.978348 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:21.978368 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:21.978374 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:21.978398 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:21.978413 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:21.978423 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:21.978433 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:21.978438 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:21.978445 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:21.978490 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:21.978515 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:21.978525 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:21.978530 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:21.978536 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:21.978543 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:21.978549 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:21.978555 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:21.978562 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
I0114 03:07:23.979244 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 2
I0114 03:07:23.979260 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.979337 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:23.980113 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:23.980187 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:23.980199 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:23.980208 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:23.980214 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:23.980233 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:23.980241 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:23.980250 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:23.980259 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:23.980265 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:23.980277 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:23.980291 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:23.980298 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:23.980309 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:23.980316 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:23.980322 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:23.980327 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:23.980336 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:23.980347 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:23.980354 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:23.980363 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:23.980369 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:23.980378 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:23.980384 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:23.980391 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
I0114 03:07:23.749533 9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.749544 9157 pod_ready.go:81] duration metric: took 295.256786ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.749553 9157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.149706 9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.149731 9157 pod_ready.go:81] duration metric: took 400.160741ms waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.149737 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.158190 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158207 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158210 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158221 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158392 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158444 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158456 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158458 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158461 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158483 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158469 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158502 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158508 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158527 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158704 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158710 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158718 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158730 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158738 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158735 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158751 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158759 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158908 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.159011 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.159025 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.179920 9157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0114 03:07:24.200426 9157 addons.go:488] enableAddons completed in 830.850832ms
I0114 03:07:24.550392 9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.550424 9157 pod_ready.go:81] duration metric: took 400.664842ms waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.550431 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.949214 9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.949226 9157 pod_ready.go:81] duration metric: took 398.790966ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.949237 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.350138 9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:25.350151 9157 pod_ready.go:81] duration metric: took 400.910872ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.350162 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.749166 9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:25.749177 9157 pod_ready.go:81] duration metric: took 399.012421ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.749184 9157 pod_ready.go:38] duration metric: took 2.302012184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:25.749196 9157 api_server.go:51] waiting for apiserver process to appear ...
I0114 03:07:25.749260 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:25.765950 9157 api_server.go:71] duration metric: took 2.396412835s to wait for apiserver process to appear ...
I0114 03:07:25.765970 9157 api_server.go:87] waiting for apiserver healthz status ...
I0114 03:07:25.765977 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:25.772427 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
ok
I0114 03:07:25.772956 9157 api_server.go:140] control plane version: v1.25.3
I0114 03:07:25.772967 9157 api_server.go:130] duration metric: took 6.991805ms to wait for apiserver health ...
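The healthz probe above is a plain HTTPS GET against the apiserver, trusted via the profile's CA certificate. A standalone sketch of the same check using only the standard library; the endpoint and CA location mirror the log, the timeout is an assumption.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.64.24:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz not reachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.64.24:8443/healthz returned %d:\n%s\n",
		resp.StatusCode, body)
}
```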
I0114 03:07:25.772974 9157 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 03:07:25.950643 9157 system_pods.go:59] 7 kube-system pods found
I0114 03:07:25.950657 9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:25.950661 9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
I0114 03:07:25.950665 9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
I0114 03:07:25.950678 9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
I0114 03:07:25.950683 9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
I0114 03:07:25.950690 9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
I0114 03:07:25.950696 9157 system_pods.go:61] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
I0114 03:07:25.950700 9157 system_pods.go:74] duration metric: took 177.722556ms to wait for pod list to return data ...
I0114 03:07:25.950706 9157 default_sa.go:34] waiting for default service account to be created ...
I0114 03:07:26.149504 9157 default_sa.go:45] found service account: "default"
I0114 03:07:26.149520 9157 default_sa.go:55] duration metric: took 198.806394ms for default service account to be created ...
I0114 03:07:26.149525 9157 system_pods.go:116] waiting for k8s-apps to be running ...
I0114 03:07:26.350967 9157 system_pods.go:86] 7 kube-system pods found
I0114 03:07:26.350980 9157 system_pods.go:89] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:26.350985 9157 system_pods.go:89] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
I0114 03:07:26.350988 9157 system_pods.go:89] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
I0114 03:07:26.350992 9157 system_pods.go:89] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
I0114 03:07:26.350999 9157 system_pods.go:89] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
I0114 03:07:26.351005 9157 system_pods.go:89] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
I0114 03:07:26.351011 9157 system_pods.go:89] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
I0114 03:07:26.351017 9157 system_pods.go:126] duration metric: took 201.48912ms to wait for k8s-apps to be running ...
I0114 03:07:26.351034 9157 system_svc.go:44] waiting for kubelet service to be running ....
I0114 03:07:26.351110 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 03:07:26.360848 9157 system_svc.go:56] duration metric: took 9.811651ms WaitForService to wait for kubelet.
I0114 03:07:26.360864 9157 kubeadm.go:573] duration metric: took 2.991330205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0114 03:07:26.360876 9157 node_conditions.go:102] verifying NodePressure condition ...
I0114 03:07:26.549739 9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0114 03:07:26.549755 9157 node_conditions.go:123] node cpu capacity is 2
I0114 03:07:26.549762 9157 node_conditions.go:105] duration metric: took 188.883983ms to run NodePressure ...
I0114 03:07:26.549769 9157 start.go:217] waiting for startup goroutines ...
I0114 03:07:26.550105 9157 ssh_runner.go:195] Run: rm -f paused
I0114 03:07:26.590700 9157 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
I0114 03:07:26.635229 9157 out.go:177] * Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:27 UTC. --
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.331007077Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/00896af5ccd623a628f391767307c1a9d45e32343eddc996b752a9c7139727f6 pid=6084 runtime=io.containerd.runc.v2
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333349197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333448259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333458160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333734685Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64b687a4b262b3705a237a5e8f1c05480509b41de28c1a76e6d5f8534499eed9 pid=6100 runtime=io.containerd.runc.v2
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348340304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348409628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348419175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348574713Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6bf08f44884c29bce8afaaee8a369ca1553b77a2f3f362f87893bed08be8580e pid=6134 runtime=io.containerd.runc.v2
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627815369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627899169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627909740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.629711389Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7a4778602ca817386ceb6b83b0cffa2e4273ed22dec5e1bd6af016c2cdbbc152 pid=6375 runtime=io.containerd.runc.v2
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635505843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635574619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635585017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635881814Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3fdcdd87125fc45218e55627224d289bb364f4e26591a574d4711c1e2bf755db pid=6391 runtime=io.containerd.runc.v2
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738126883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738274110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738296575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738419462Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b511107c0d65ed1187a9182a9b33f82bfbf4fa8cfee81c4ebdc2d2c2fc5ecc42 pid=6710 runtime=io.containerd.runc.v2
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035767688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035869182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035879278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.036327785Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4747fe303fd10345c6f83fc3afdd096d34c7cd162e74c11660dbc35198c8c91a pid=6755 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER     IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
4747fe303fd10   6e38f40d628db   3 seconds ago    Running   storage-provisioner       0         b511107c0d65e
3fdcdd87125fc   beaaf00edd38a   18 seconds ago   Running   kube-proxy                3         4e57c85660d83
7a4778602ca81   5185b96f0becf   18 seconds ago   Running   coredns                   2         8919d849501d6
6bf08f44884c2   6d23ec0e8b87e   23 seconds ago   Running   kube-scheduler            3         687228c21ca63
64b687a4b262b   6039992312758   23 seconds ago   Running   kube-controller-manager   3         832b08b9a62e2
fa0ae81988fe7   0346dbd74bcb9   23 seconds ago   Running   kube-apiserver            3         ff4b3ee4f8ae5
00896af5ccd62   a8a176a5d5d69   23 seconds ago   Running   etcd                      3         ecaeb9f764e75
a91b8dbf52b28   beaaf00edd38a   36 seconds ago   Created   kube-proxy                2         be1781a847e83
4ef492042630b   5185b96f0becf   36 seconds ago   Exited    coredns                   1         5d6ae273017b7
1f0472740d8e5   a8a176a5d5d69   36 seconds ago   Exited    etcd                      2         9307465ae5847
8cfdb196b1427   6039992312758   36 seconds ago   Exited    kube-controller-manager   2         c7561d6051ce8
ec5b05843edc6   0346dbd74bcb9   36 seconds ago   Exited    kube-apiserver            2         a1988593cada4
d1df9d20a995d   6d23ec0e8b87e   36 seconds ago   Exited    kube-scheduler            2         76689e83a5147
*
* ==> coredns [4ef492042630] <==
* [INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/errors: 2 8922087648600135430.3435341938167049804. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
*
* ==> coredns [7a4778602ca8] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: pause-030526
Roles: control-plane
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=pause-030526
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
                    minikube.k8s.io/name=pause-030526
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2023_01_14T03_06_03_0700
                    minikube.k8s.io/version=v1.28.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 14 Jan 2023 11:06:01 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-030526
AcquireTime: <unset>
RenewTime: Sat, 14 Jan 2023 11:07:18 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:06:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 14 Jan 2023 11:07:08 +0000   Sat, 14 Jan 2023 11:07:08 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.24
Hostname: pause-030526
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 5158a2f1d68b4728bdca3e981e3d16f1
System UUID: 59a511ed-0000-0000-93df-149d997cd0f1
Boot ID: 7071b7f0-575a-4ffd-bad0-919bd7ad3180
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
  Namespace     Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------     ----                                   ------------  ----------  ---------------  -------------  ---
  kube-system   coredns-565d847f94-wk8g2               100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     74s
  kube-system   etcd-pause-030526                      100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         86s
  kube-system   kube-apiserver-pause-030526            250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
  kube-system   kube-controller-manager-pause-030526   200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
  kube-system   kube-proxy-9lkcj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
  kube-system   kube-scheduler-pause-030526            100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
  kube-system   storage-provisioner                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (37%)  0 (0%)
  memory             170Mi (8%)  170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From             Message
  ----    ------                   ----                 ----             -------
  Normal  Starting                 71s                  kube-proxy
  Normal  Starting                 18s                  kube-proxy
  Normal  Starting                 51s                  kube-proxy
  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  99s (x7 over 100s)   kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    99s (x6 over 100s)   kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     99s (x6 over 100s)   kubelet          Node pause-030526 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientPID     86s                  kubelet          Node pause-030526 status is now: NodeHasSufficientPID
  Normal  NodeHasSufficientMemory  86s                  kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    86s                  kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
  Normal  NodeReady                86s                  kubelet          Node pause-030526 status is now: NodeReady
  Normal  NodeAllocatableEnforced  86s                  kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 86s                  kubelet          Starting kubelet.
  Normal  RegisteredNode           75s                  node-controller  Node pause-030526 event: Registered Node pause-030526 in Controller
  Normal  Starting                 25s                  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  25s (x8 over 25s)    kubelet          Node pause-030526 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)    kubelet          Node pause-030526 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     25s (x7 over 25s)    kubelet          Node pause-030526 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  25s                  kubelet          Updated Node Allocatable limit across pods
  Normal  RegisteredNode           8s                   node-controller  Node pause-030526 event: Registered Node pause-030526 in Controller
*
* ==> dmesg <==
* [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.896084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.842858] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[ +0.089665] systemd-fstab-generator[541]: Ignoring "noauto" for root device
[ +5.167104] systemd-fstab-generator[762]: Ignoring "noauto" for root device
[ +1.234233] kauditd_printk_skb: 16 callbacks suppressed
[ +0.224985] systemd-fstab-generator[921]: Ignoring "noauto" for root device
[ +0.092006] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.090717] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +1.460171] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
[ +0.081044] systemd-fstab-generator[1104]: Ignoring "noauto" for root device
[ +2.991024] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
[ +0.466189] kauditd_printk_skb: 68 callbacks suppressed
[Jan14 11:06] systemd-fstab-generator[2009]: Ignoring "noauto" for root device
[ +12.288147] kauditd_printk_skb: 8 callbacks suppressed
[ +11.014225] kauditd_printk_skb: 18 callbacks suppressed
[ +4.097840] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
[ +0.157534] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
[ +0.143509] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
[ +17.238898] systemd-fstab-generator[4389]: Ignoring "noauto" for root device
[ +0.099215] systemd-fstab-generator[4443]: Ignoring "noauto" for root device
[Jan14 11:07] kauditd_printk_skb: 31 callbacks suppressed
[ +1.304783] systemd-fstab-generator[5886]: Ignoring "noauto" for root device
*
* ==> etcd [00896af5ccd6] <==
* {"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"db97d05830b4a428","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-14T11:07:05.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 switched to configuration voters=(15823344892982371368)"}
{"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","added-peer-id":"db97d05830b4a428","added-peer-peer-urls":["https://192.168.64.24:2380"]}
{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T11:07:05.157Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"db97d05830b4a428","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db97d05830b4a428","initial-advertise-peer-urls":["https://192.168.64.24:2380"],"listen-peer-urls":["https://192.168.64.24:2380"],"advertise-client-urls":["https://192.168.64.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.24:2380"}
{"level":"info","ts":"2023-01-14T11:07:05.170Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.24:2380"}
{"level":"info","ts":"2023-01-14T11:07:06.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 is starting a new election at term 3"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became pre-candidate at term 3"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgPreVoteResp from db97d05830b4a428 at term 3"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became candidate at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgVoteResp from db97d05830b4a428 at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became leader at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db97d05830b4a428 elected leader db97d05830b4a428 at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"db97d05830b4a428","local-member-attributes":"{Name:pause-030526 ClientURLs:[https://192.168.64.24:2379]}","request-path":"/0/members/db97d05830b4a428/attributes","cluster-id":"f9c405dda3109066","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.24:2379"}
{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> etcd [1f0472740d8e] <==
*
*
* ==> kernel <==
* 11:07:28 up 2 min, 0 users, load average: 0.60, 0.29, 0.11
Linux pause-030526 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [ec5b05843edc] <==
*
*
* ==> kube-apiserver [fa0ae81988fe] <==
* I0114 11:07:07.835235 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0114 11:07:07.835321 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0114 11:07:07.835731 1 autoregister_controller.go:141] Starting autoregister controller
I0114 11:07:07.835826 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0114 11:07:07.856121 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0114 11:07:07.856531 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0114 11:07:07.858292 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0114 11:07:07.858319 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0114 11:07:07.958455 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0114 11:07:08.030449 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0114 11:07:08.031216 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0114 11:07:08.032075 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0114 11:07:08.032931 1 apf_controller.go:305] Running API Priority and Fairness config worker
I0114 11:07:08.035463 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0114 11:07:08.035912 1 cache.go:39] Caches are synced for autoregister controller
I0114 11:07:08.037457 1 shared_informer.go:262] Caches are synced for node_authorizer
I0114 11:07:08.631959 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0114 11:07:08.834884 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0114 11:07:09.433088 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0114 11:07:09.439315 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0114 11:07:09.467659 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0114 11:07:09.481246 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0114 11:07:09.492033 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0114 11:07:20.415432 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0114 11:07:20.584645 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [64b687a4b262] <==
* I0114 11:07:20.459506 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0114 11:07:20.462354 1 shared_informer.go:262] Caches are synced for expand
I0114 11:07:20.462369 1 shared_informer.go:262] Caches are synced for namespace
I0114 11:07:20.462460 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0114 11:07:20.465035 1 shared_informer.go:262] Caches are synced for ReplicationController
I0114 11:07:20.469843 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0114 11:07:20.469889 1 shared_informer.go:262] Caches are synced for TTL
I0114 11:07:20.472294 1 shared_informer.go:262] Caches are synced for taint
I0114 11:07:20.472382 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0114 11:07:20.472447 1 taint_manager.go:209] "Sending events to api server"
I0114 11:07:20.472424 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0114 11:07:20.472799 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-030526. Assuming now as a timestamp.
I0114 11:07:20.472929 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0114 11:07:20.473272 1 event.go:294] "Event occurred" object="pause-030526" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-030526 event: Registered Node pause-030526 in Controller"
I0114 11:07:20.480533 1 shared_informer.go:262] Caches are synced for daemon sets
I0114 11:07:20.490072 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0114 11:07:20.496927 1 shared_informer.go:262] Caches are synced for HPA
I0114 11:07:20.574770 1 shared_informer.go:262] Caches are synced for endpoint
I0114 11:07:20.589017 1 shared_informer.go:262] Caches are synced for disruption
I0114 11:07:20.591967 1 shared_informer.go:262] Caches are synced for resource quota
I0114 11:07:20.598516 1 shared_informer.go:262] Caches are synced for stateful set
I0114 11:07:20.621015 1 shared_informer.go:262] Caches are synced for resource quota
I0114 11:07:21.005895 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 11:07:21.066929 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 11:07:21.067007 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [8cfdb196b142] <==
*
*
* ==> kube-proxy [3fdcdd87125f] <==
* I0114 11:07:09.764942 1 node.go:163] Successfully retrieved node IP: 192.168.64.24
I0114 11:07:09.765007 1 server_others.go:138] "Detected node IP" address="192.168.64.24"
I0114 11:07:09.765022 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 11:07:09.789595 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0114 11:07:09.789674 1 server_others.go:206] "Using iptables Proxier"
I0114 11:07:09.789705 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 11:07:09.789866 1 server.go:661] "Version info" version="v1.25.3"
I0114 11:07:09.789895 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 11:07:09.791171 1 config.go:317] "Starting service config controller"
I0114 11:07:09.791204 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 11:07:09.791233 1 config.go:226] "Starting endpoint slice config controller"
I0114 11:07:09.791257 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 11:07:09.792096 1 config.go:444] "Starting node config controller"
I0114 11:07:09.792122 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 11:07:09.892056 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0114 11:07:09.892223 1 shared_informer.go:262] Caches are synced for node config
I0114 11:07:09.892063 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [a91b8dbf52b2] <==
*
*
* ==> kube-scheduler [6bf08f44884c] <==
* I0114 11:07:05.785495 1 serving.go:348] Generated self-signed cert in-memory
W0114 11:07:07.930966 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0114 11:07:07.931087 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0114 11:07:07.931149 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 11:07:07.931342 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 11:07:07.946916 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 11:07:07.946999 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 11:07:07.947953 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 11:07:07.948063 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 11:07:07.949707 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 11:07:07.948083 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0114 11:07:07.964273 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0114 11:07:07.964466 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0114 11:07:07.964647 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0114 11:07:07.964697 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0114 11:07:07.964834 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0114 11:07:07.964957 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0114 11:07:08.050308 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [d1df9d20a995] <==
* I0114 11:06:52.791744 1 serving.go:348] Generated self-signed cert in-memory
W0114 11:06:53.280219 1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.64.24:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.64.24:8443: connect: connection refused
W0114 11:06:53.280234 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 11:06:53.280239 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 11:06:53.282436 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 11:06:53.282467 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 11:06:53.284472 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 11:06:53.284543 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 11:06:53.284551 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 11:06:53.284723 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 11:06:53.284834 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0114 11:06:53.284988 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0114 11:06:53.285446 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 11:06:53.285486 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0114 11:06:53.285807 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:29 UTC. --
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.304093 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.404883 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.505492 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.606226 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.706862 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.807067 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.908125 5892 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.909327 5892 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055387 5892 kubelet_node_status.go:108] "Node was previously registered" node="pause-030526"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055555 5892 kubelet_node_status.go:73] "Successfully registered node" node="pause-030526"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.675696 5892 apiserver.go:52] "Watching apiserver"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677949 5892 topology_manager.go:205] "Topology Admit Handler"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677999 5892 topology_manager.go:205] "Topology Admit Handler"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821148 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x7q\" (UniqueName: \"kubernetes.io/projected/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-kube-api-access-72x7q\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821518 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-proxy\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821683 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-config-volume\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821740 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-xtables-lock\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821852 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-lib-modules\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821958 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt7j\" (UniqueName: \"kubernetes.io/projected/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-api-access-zmt7j\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.822000 5892 reconciler.go:169] "Reconciler: start to sync state"
Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.578961 5892 scope.go:115] "RemoveContainer" containerID="a91b8dbf52b2899bfa63a86f3b29f268678711d37ac71fba7ef99acfabef6696"
Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.579108 5892 scope.go:115] "RemoveContainer" containerID="4ef492042630b948c5a7cf8834310194a4c1a14d0407a74904076077074843a0"
Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.345531 5892 topology_manager.go:205] "Topology Admit Handler"
Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451536 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/14a8b558-cad1-44aa-8434-e31a93fcc6e0-tmp\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451687 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxk6\" (UniqueName: \"kubernetes.io/projected/14a8b558-cad1-44aa-8434-e31a93fcc6e0-kube-api-access-rjxk6\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
*
* ==> storage-provisioner [4747fe303fd1] <==
* I0114 11:07:25.092946 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 11:07:25.101139 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 11:07:25.101183 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 11:07:25.105549 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 11:07:25.106105 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
I0114 11:07:25.107230 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"578576fd-279f-4e3d-946a-2f8e3400fd7a", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9 became leader
I0114 11:07:25.207006 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-030526 -n pause-030526
helpers_test.go:261: (dbg) Run: kubectl --context pause-030526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-030526 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-030526 describe pod : exit status 1 (41.000291ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-030526 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-030526 -n pause-030526
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-030526 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-030526 logs -n 25: (2.548410029s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
| stop | -p scheduled-stop-025204 | scheduled-stop-025204 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | |
| | --schedule 15s | | | | | |
| stop | -p scheduled-stop-025204 | scheduled-stop-025204 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
| | --schedule 15s | | | | | |
| delete | -p scheduled-stop-025204 | scheduled-stop-025204 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:53 PST |
| start | -p skaffold-025353 | skaffold-025353 | jenkins | v1.28.0 | 14 Jan 23 02:53 PST | 14 Jan 23 02:54 PST |
| | --memory=2600 | | | | | |
| | --driver=hyperkit | | | | | |
| docker-env | --shell none -p | skaffold-025353 | skaffold | v1.28.0 | 14 Jan 23 02:54 PST | 14 Jan 23 02:54 PST |
| | skaffold-025353 | | | | | |
| | --user=skaffold | | | | | |
| delete | -p skaffold-025353 | skaffold-025353 | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 02:55 PST |
| start | -p offline-docker-025507 | offline-docker-025507 | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
| | --alsologtostderr -v=1 | | | | | |
| | --memory=2048 --wait=true | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p auto-025507 --memory=2048 | auto-025507 | jenkins | v1.28.0 | 14 Jan 23 02:55 PST | 14 Jan 23 03:02 PST |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p auto-025507 pgrep -a | auto-025507 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
| | kubelet | | | | | |
| delete | -p offline-docker-025507 | offline-docker-025507 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:03 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p auto-025507 | auto-025507 | jenkins | v1.28.0 | 14 Jan 23 03:02 PST | 14 Jan 23 03:02 PST |
| stop | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:03 PST |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:03 PST | 14 Jan 23 03:04 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p stopped-upgrade-030226 | stopped-upgrade-030226 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:05 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p kubernetes-upgrade-030216 | kubernetes-upgrade-030216 | jenkins | v1.28.0 | 14 Jan 23 03:04 PST | 14 Jan 23 03:04 PST |
| delete | -p stopped-upgrade-030226 | stopped-upgrade-030226 | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:05 PST |
| start | -p pause-030526 --memory=2048 | pause-030526 | jenkins | v1.28.0 | 14 Jan 23 03:05 PST | 14 Jan 23 03:06 PST |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p running-upgrade-030435 | running-upgrade-030435 | jenkins | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-030526 | pause-030526 | jenkins | v1.28.0 | 14 Jan 23 03:06 PST | 14 Jan 23 03:07 PST |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p running-upgrade-030435 | running-upgrade-030435 | jenkins | v1.28.0 | 14 Jan 23 03:07 PST | 14 Jan 23 03:07 PST |
| start | -p NoKubernetes-030718 | NoKubernetes-030718 | jenkins | v1.28.0 | 14 Jan 23 03:07 PST | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p NoKubernetes-030718 | NoKubernetes-030718 | jenkins | v1.28.0 | 14 Jan 23 03:07 PST | |
| | --driver=hyperkit | | | | | |
|------------|--------------------------------|---------------------------|----------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/14 03:07:19
Running on machine: MacOS-Agent-1
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0114 03:07:19.019830 9247 out.go:296] Setting OutFile to fd 1 ...
I0114 03:07:19.020100 9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 03:07:19.020104 9247 out.go:309] Setting ErrFile to fd 2...
I0114 03:07:19.020107 9247 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0114 03:07:19.020220 9247 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15642-1627/.minikube/bin
I0114 03:07:19.020722 9247 out.go:303] Setting JSON to false
I0114 03:07:19.039498 9247 start.go:125] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":4012,"bootTime":1673690427,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0114 03:07:19.039605 9247 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0114 03:07:19.077571 9247 out.go:177] * [NoKubernetes-030718] minikube v1.28.0 on Darwin 13.0.1
I0114 03:07:19.136784 9247 notify.go:220] Checking for updates...
I0114 03:07:19.174094 9247 out.go:177] - MINIKUBE_LOCATION=15642
I0114 03:07:19.232648 9247 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15642-1627/kubeconfig
I0114 03:07:19.306906 9247 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0114 03:07:19.328006 9247 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0114 03:07:19.348934 9247 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15642-1627/.minikube
I0114 03:07:19.370518 9247 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:07:19.370581 9247 driver.go:365] Setting default libvirt URI to qemu:///system
I0114 03:07:19.398770 9247 out.go:177] * Using the hyperkit driver based on user configuration
I0114 03:07:19.441028 9247 start.go:294] selected driver: hyperkit
I0114 03:07:19.441047 9247 start.go:838] validating driver "hyperkit" against <nil>
I0114 03:07:19.441077 9247 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0114 03:07:19.441203 9247 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 03:07:19.441421 9247 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15642-1627/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0114 03:07:19.449241 9247 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0114 03:07:19.452425 9247 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:19.452438 9247 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0114 03:07:19.452505 9247 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0114 03:07:19.454697 9247 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
I0114 03:07:19.454829 9247 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
I0114 03:07:19.454850 9247 cni.go:95] Creating CNI manager for ""
I0114 03:07:19.454857 9247 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0114 03:07:19.454867 9247 start_flags.go:319] config:
{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
I0114 03:07:19.454983 9247 iso.go:125] acquiring lock: {Name:mkf812bef4e208b28a360507a7c86d17e208f6c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0114 03:07:19.497050 9247 out.go:177] * Starting control plane node NoKubernetes-030718 in cluster NoKubernetes-030718
I0114 03:07:19.518880 9247 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0114 03:07:19.518967 9247 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0114 03:07:19.518991 9247 cache.go:57] Caching tarball of preloaded images
I0114 03:07:19.519216 9247 preload.go:174] Found /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0114 03:07:19.519232 9247 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0114 03:07:19.519384 9247 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json ...
I0114 03:07:19.519444 9247 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/NoKubernetes-030718/config.json: {Name:mk5caec35ff8fcf3d9c5465ac05bd2e53369341a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:19.519982 9247 cache.go:193] Successfully downloaded all kic artifacts
I0114 03:07:19.520014 9247 start.go:364] acquiring machines lock for NoKubernetes-030718: {Name:mkd798b4eb4b12534fdc8a3119639005936a788a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0114 03:07:19.520101 9247 start.go:368] acquired machines lock for "NoKubernetes-030718" in 77µs
I0114 03:07:19.520133 9247 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-030718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-030718 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 03:07:19.520191 9247 start.go:125] createHost starting for "" (driver="hyperkit")
I0114 03:07:19.342684 9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:19.342697 9157 pod_ready.go:81] duration metric: took 10.009021387s waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:19.342705 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:21.351765 9157 pod_ready.go:102] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"False"
I0114 03:07:23.350513 9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.350547 9157 pod_ready.go:81] duration metric: took 4.007860495s waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.350554 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.353476 9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.353485 9157 pod_ready.go:81] duration metric: took 2.925304ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.353490 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.356134 9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.356142 9157 pod_ready.go:81] duration metric: took 2.647244ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.356148 9157 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.358793 9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.358800 9157 pod_ready.go:81] duration metric: took 2.641458ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.358804 9157 pod_ready.go:38] duration metric: took 14.032386778s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
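Each pod_ready line above is one iteration of a poll against the pod's PodReady condition. A minimal client-go sketch of that loop; waitPodReady is a hypothetical name, and minikube's pod_ready.go differs in detail.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod until its PodReady condition is True or
// the timeout (4m0s / 6m0s in the log) expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}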
I0114 03:07:23.358813 9157 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0114 03:07:23.366176 9157 ops.go:34] apiserver oom_adj: -16
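The oom_adj probe runs `cat /proc/$(pgrep kube-apiserver)/oom_adj` over SSH; -16 means the kernel is strongly discouraged from OOM-killing the apiserver. A local Go equivalent, assuming a single matching process; apiserverOOMAdj is a hypothetical name.

package oomcheck

import (
	"os"
	"os/exec"
	"strconv"
	"strings"
)

// apiserverOOMAdj resolves the kube-apiserver pid with pgrep and reads
// /proc/<pid>/oom_adj, mirroring the shell pipeline in the log.
func apiserverOOMAdj() (int, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return 0, err
	}
	out, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}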
I0114 03:07:23.366186 9157 kubeadm.go:631] restartCluster took 35.017662843s
I0114 03:07:23.366207 9157 kubeadm.go:398] StartCluster complete in 35.039471935s
I0114 03:07:23.366217 9157 settings.go:142] acquiring lock: {Name:mk0c64d56bf3ff3479e8fa9f559b4f9cf25d55df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:23.366305 9157 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15642-1627/kubeconfig
I0114 03:07:23.366836 9157 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15642-1627/kubeconfig: {Name:mk9e4b5f5c881bca46b5d9046e1e4e38df78e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0114 03:07:23.367658 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 03:07:23.369507 9157 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-030526" rescaled to 1
I0114 03:07:23.369535 9157 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.24 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0114 03:07:23.369542 9157 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0114 03:07:23.369576 9157 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0114 03:07:23.369692 9157 config.go:180] Loaded profile config "pause-030526": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0114 03:07:23.390480 9157 out.go:177] * Verifying Kubernetes components...
I0114 03:07:23.390629 9157 addons.go:65] Setting storage-provisioner=true in profile "pause-030526"
I0114 03:07:23.433350 9157 addons.go:227] Setting addon storage-provisioner=true in "pause-030526"
I0114 03:07:23.390632 9157 addons.go:65] Setting default-storageclass=true in profile "pause-030526"
W0114 03:07:23.433358 9157 addons.go:236] addon storage-provisioner should already be in state true
I0114 03:07:23.433392 9157 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-030526"
I0114 03:07:23.433406 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 03:07:23.430373 9157 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
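The host-record check above fetches the coredns ConfigMap (the kubectl get configmap run a few lines earlier) and looks for host.minikube.internal before patching anything. A hedged client-go sketch; hasHostRecord is a hypothetical name, and the log does this via kubectl over SSH rather than client-go.

package dnscheck

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord reports whether the coredns Corefile already mentions the
// host.minikube.internal record, so rewriting it can be skipped.
func hasHostRecord(cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}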
I0114 03:07:23.433421 9157 host.go:66] Checking if "pause-030526" exists ...
I0114 03:07:23.433815 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.433877 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.433873 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.433900 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.442841 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52806
I0114 03:07:23.443203 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52808
I0114 03:07:23.443537 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.443728 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.443899 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.443908 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.444057 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.444066 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.444119 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.444329 9157 node_ready.go:35] waiting up to 6m0s for node "pause-030526" to be "Ready" ...
I0114 03:07:23.444380 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.444587 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.444602 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.444609 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.444705 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.445301 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.447147 9157 node_ready.go:49] node "pause-030526" has status "Ready":"True"
I0114 03:07:23.447164 9157 node_ready.go:38] duration metric: took 2.815218ms waiting for node "pause-030526" to be "Ready" ...
I0114 03:07:23.447169 9157 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:23.447225 9157 kapi.go:59] client config for pause-030526: &rest.Config{Host:"https://192.168.64.24:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/profiles/pause-030526/client.key", CAFile:"/Users/jenkins/minikube-integration/15642-1627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448cc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0114 03:07:23.450515 9157 addons.go:227] Setting addon default-storageclass=true in "pause-030526"
W0114 03:07:23.450531 9157 addons.go:236] addon default-storageclass should already be in state true
I0114 03:07:23.450551 9157 host.go:66] Checking if "pause-030526" exists ...
I0114 03:07:23.450887 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.450912 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.453524 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52810
I0114 03:07:23.454275 9157 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.454289 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.454742 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.454758 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.455002 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.455108 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.455188 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.455261 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.456200 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:07:23.459195 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52812
I0114 03:07:23.477120 9157 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0114 03:07:23.477524 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.498347 9157 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0114 03:07:23.498358 9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0114 03:07:23.498372 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:07:23.498499 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:07:23.498595 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:07:23.498695 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.498707 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.498780 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:07:23.498953 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
I0114 03:07:23.499031 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.499602 9157 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:23.499665 9157 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:23.508249 9157 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52815
I0114 03:07:23.508606 9157 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:23.509066 9157 main.go:134] libmachine: Using API Version 1
I0114 03:07:23.509081 9157 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:23.509378 9157 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:23.509472 9157 main.go:134] libmachine: (pause-030526) Calling .GetState
I0114 03:07:23.509563 9157 main.go:134] libmachine: (pause-030526) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.509636 9157 main.go:134] libmachine: (pause-030526) DBG | hyperkit pid from json: 8992
I0114 03:07:23.510952 9157 main.go:134] libmachine: (pause-030526) Calling .DriverName
I0114 03:07:23.511144 9157 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0114 03:07:23.511152 9157 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0114 03:07:23.511161 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHHostname
I0114 03:07:23.511250 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHPort
I0114 03:07:23.511331 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHKeyPath
I0114 03:07:23.511433 9157 main.go:134] libmachine: (pause-030526) Calling .GetSSHUsername
I0114 03:07:23.511524 9157 sshutil.go:53] new ssh client: &{IP:192.168.64.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/pause-030526/id_rsa Username:docker}
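The two "scp memory --> ..." lines above stage the addon manifests by streaming in-memory bytes over the SSH sessions just established, so no scp binary is needed on the guest. A minimal sketch of that pattern with golang.org/x/crypto/ssh; copyMemory is a hypothetical name, and piping into `sudo tee` is an assumption about the mechanism.

package sshcopy

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// copyMemory streams an in-memory manifest to dst on the node by piping it
// into "sudo tee" over an existing SSH client.
func copyMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dst + " >/dev/null")
}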
I0114 03:07:23.553319 9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0114 03:07:23.563588 9157 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
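With the manifests in place, both addons are applied with the cluster's own kubectl binary and the in-VM kubeconfig, exactly as the two Run lines above show. A sketch of that invocation; applyAddon is a hypothetical wrapper.

package addons

import (
	"fmt"
	"os/exec"
)

// applyAddon invokes the node's kubectl against a manifest that was copied
// in beforehand; sudo accepts the KUBECONFIG=... assignment inline.
func applyAddon(kubectlPath, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectlPath, "apply", "-f", manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}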
I0114 03:07:19.541712 9247 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0114 03:07:19.542144 9247 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0114 03:07:19.542218 9247 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0114 03:07:19.550680 9247 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52804
I0114 03:07:19.551062 9247 main.go:134] libmachine: () Calling .GetVersion
I0114 03:07:19.551455 9247 main.go:134] libmachine: Using API Version 1
I0114 03:07:19.551463 9247 main.go:134] libmachine: () Calling .SetConfigRaw
I0114 03:07:19.551681 9247 main.go:134] libmachine: () Calling .GetMachineName
I0114 03:07:19.551780 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetMachineName
I0114 03:07:19.551847 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .DriverName
I0114 03:07:19.551975 9247 start.go:159] libmachine.API.Create for "NoKubernetes-030718" (driver="hyperkit")
I0114 03:07:19.552002 9247 client.go:168] LocalClient.Create starting
I0114 03:07:19.552038 9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/ca.pem
I0114 03:07:19.552082 9247 main.go:134] libmachine: Decoding PEM data...
I0114 03:07:19.552098 9247 main.go:134] libmachine: Parsing certificate...
I0114 03:07:19.552156 9247 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15642-1627/.minikube/certs/cert.pem
I0114 03:07:19.552191 9247 main.go:134] libmachine: Decoding PEM data...
I0114 03:07:19.552202 9247 main.go:134] libmachine: Parsing certificate...
I0114 03:07:19.552213 9247 main.go:134] libmachine: Running pre-create checks...
I0114 03:07:19.552220 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .PreCreateCheck
I0114 03:07:19.552322 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.552519 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .GetConfigRaw
I0114 03:07:19.552961 9247 main.go:134] libmachine: Creating machine...
I0114 03:07:19.552966 9247 main.go:134] libmachine: (NoKubernetes-030718) Calling .Create
I0114 03:07:19.553050 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.553180 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.553037 9257 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15642-1627/.minikube
I0114 03:07:19.553267 9247 main.go:134] libmachine: (NoKubernetes-030718) Downloading /Users/jenkins/minikube-integration/15642-1627/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15642-1627/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso...
I0114 03:07:19.718817 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.718695 9257 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/id_rsa...
I0114 03:07:19.778524 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.778469 9257 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk...
I0114 03:07:19.778533 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing magic tar header
I0114 03:07:19.778627 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Writing SSH key tar header
I0114 03:07:19.779210 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | I0114 03:07:19.779155 9257 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718 ...
I0114 03:07:19.950626 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.950641 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid
I0114 03:07:19.950650 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Using UUID 9cd1b71a-93fb-11ed-97d5-149d997cd0f1
I0114 03:07:19.972204 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Generated MAC aa:b9:cb:46:9b:fa
I0114 03:07:19.972218 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718
I0114 03:07:19.972248 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0114 03:07:19.972285 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9cd1b71a-93fb-11ed-97d5-149d997cd0f1", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000182bd0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage", Initrd:"/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0114 03:07:19.972374 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9cd1b71a-93fb-11ed-97d5-149d997cd0f1", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"}
I0114 03:07:19.972409 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9cd1b71a-93fb-11ed-97d5-149d997cd0f1 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/NoKubernetes-030718.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/tty,log=/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/console-ring -f kexec,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/bzimage,/Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-030718"
I0114 03:07:19.972414 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0114 03:07:19.973744 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 DEBUG: hyperkit: Pid is 9258
I0114 03:07:19.974155 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 0
I0114 03:07:19.974164 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:19.974238 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:19.975810 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:19.976175 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:19.976189 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:19.976220 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:19.976231 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:19.976241 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:19.976250 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:19.976259 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:19.976268 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:19.976274 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:19.976280 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:19.976296 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:19.976308 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:19.976317 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:19.976325 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:19.976336 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:19.976344 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:19.976352 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:19.976359 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:19.976380 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:19.976389 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:19.976395 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:19.976400 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:19.976423 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:19.976436 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
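Each "Attempt N" block above greps /var/db/dhcpd_leases for the MAC hyperkit just generated (aa:b9:cb:46:9b:fa); until the guest obtains a DHCP lease, only the 23 existing entries are found and the driver sleeps and retries. A sketch of that lookup; ipForMAC is a hypothetical name, and it assumes ip_address precedes hw_address within each block, as in the macOS lease-file format.

package leases

import (
	"bufio"
	"os"
	"strings"
)

// ipForMAC scans /var/db/dhcpd_leases for the block whose hw_address matches
// the VM's MAC and returns its ip_address. The "1," prefix is the hardware
// type recorded by macOS's DHCP daemon.
func ipForMAC(mac string) (string, error) {
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if line == "hw_address=1,"+mac {
			return ip, nil // ip_address was seen earlier in this block
		}
	}
	return "", sc.Err()
}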
I0114 03:07:19.980329 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0114 03:07:19.989638 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15642-1627/.minikube/machines/NoKubernetes-030718/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0114 03:07:19.990267 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0114 03:07:19.990287 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0114 03:07:19.990297 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0114 03:07:19.990312 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0114 03:07:20.551946 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0114 03:07:20.551964 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0114 03:07:20.657062 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0114 03:07:20.657088 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0114 03:07:20.657094 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0114 03:07:20.657104 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0114 03:07:20.657933 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0114 03:07:20.657941 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0114 03:07:21.977330 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 1
I0114 03:07:21.977341 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:21.977398 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:21.978146 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:21.978288 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:21.978295 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:21.978302 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:21.978329 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:21.978337 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:21.978342 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:21.978348 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:21.978368 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:21.978374 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:21.978398 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:21.978413 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:21.978423 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:21.978433 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:21.978438 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:21.978445 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:21.978490 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:21.978515 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:21.978525 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:21.978530 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:21.978536 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:21.978543 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:21.978549 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:21.978555 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:21.978562 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
I0114 03:07:23.979244 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 2
I0114 03:07:23.979260 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:23.979337 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:23.980113 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:23.980187 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:23.980199 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:23.980208 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:23.980214 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:23.980233 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:23.980241 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:23.980250 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:23.980259 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:23.980265 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:23.980277 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:23.980291 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:23.980298 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:23.980309 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:23.980316 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:23.980322 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:23.980327 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:23.980336 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:23.980347 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:23.980354 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:23.980363 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:23.980369 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:23.980378 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:23.980384 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:23.980391 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
I0114 03:07:23.749533 9157 pod_ready.go:92] pod "coredns-565d847f94-wk8g2" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:23.749544 9157 pod_ready.go:81] duration metric: took 295.256786ms waiting for pod "coredns-565d847f94-wk8g2" in "kube-system" namespace to be "Ready" ...
I0114 03:07:23.749553 9157 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.149706 9157 pod_ready.go:92] pod "etcd-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.149731 9157 pod_ready.go:81] duration metric: took 400.160741ms waiting for pod "etcd-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.149737 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.158190 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158207 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158210 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158221 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158392 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158444 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158456 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158458 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158461 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158483 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158469 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158502 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158508 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158527 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158704 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158710 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158718 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158730 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.158738 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.158735 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.158751 9157 main.go:134] libmachine: Making call to close driver server
I0114 03:07:24.158759 9157 main.go:134] libmachine: (pause-030526) Calling .Close
I0114 03:07:24.158908 9157 main.go:134] libmachine: (pause-030526) DBG | Closing plugin on server side
I0114 03:07:24.159011 9157 main.go:134] libmachine: Successfully made call to close driver server
I0114 03:07:24.159025 9157 main.go:134] libmachine: Making call to close connection to plugin binary
I0114 03:07:24.179920 9157 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0114 03:07:24.200426 9157 addons.go:488] enableAddons completed in 830.850832ms
I0114 03:07:24.550392 9157 pod_ready.go:92] pod "kube-apiserver-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.550424 9157 pod_ready.go:81] duration metric: took 400.664842ms waiting for pod "kube-apiserver-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.550431 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.949214 9157 pod_ready.go:92] pod "kube-controller-manager-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:24.949226 9157 pod_ready.go:81] duration metric: took 398.790966ms waiting for pod "kube-controller-manager-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:24.949237 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.350138 9157 pod_ready.go:92] pod "kube-proxy-9lkcj" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:25.350151 9157 pod_ready.go:81] duration metric: took 400.910872ms waiting for pod "kube-proxy-9lkcj" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.350162 9157 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.749166 9157 pod_ready.go:92] pod "kube-scheduler-pause-030526" in "kube-system" namespace has status "Ready":"True"
I0114 03:07:25.749177 9157 pod_ready.go:81] duration metric: took 399.012421ms waiting for pod "kube-scheduler-pause-030526" in "kube-system" namespace to be "Ready" ...
I0114 03:07:25.749184 9157 pod_ready.go:38] duration metric: took 2.302012184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0114 03:07:25.749196 9157 api_server.go:51] waiting for apiserver process to appear ...
I0114 03:07:25.749260 9157 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0114 03:07:25.765950 9157 api_server.go:71] duration metric: took 2.396412835s to wait for apiserver process to appear ...
I0114 03:07:25.765970 9157 api_server.go:87] waiting for apiserver healthz status ...
I0114 03:07:25.765977 9157 api_server.go:252] Checking apiserver healthz at https://192.168.64.24:8443/healthz ...
I0114 03:07:25.772427 9157 api_server.go:278] https://192.168.64.24:8443/healthz returned 200:
ok
I0114 03:07:25.772956 9157 api_server.go:140] control plane version: v1.25.3
I0114 03:07:25.772967 9157 api_server.go:130] duration metric: took 6.991805ms to wait for apiserver health ...
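The healthz probe above is a plain HTTPS GET against the apiserver using the profile's client certificate and the cluster CA; "200: ok" ends the wait. A minimal sketch; checkHealthz is a hypothetical name.

package health

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// checkHealthz GETs /healthz with mutual TLS, succeeding only on HTTP 200.
func checkHealthz(host, certFile, keyFile, caFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://" + host + ":8443/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the log shows "returned 200: ok"
}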
I0114 03:07:25.772974 9157 system_pods.go:43] waiting for kube-system pods to appear ...
I0114 03:07:25.950643 9157 system_pods.go:59] 7 kube-system pods found
I0114 03:07:25.950657 9157 system_pods.go:61] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:25.950661 9157 system_pods.go:61] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
I0114 03:07:25.950665 9157 system_pods.go:61] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
I0114 03:07:25.950678 9157 system_pods.go:61] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
I0114 03:07:25.950683 9157 system_pods.go:61] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
I0114 03:07:25.950690 9157 system_pods.go:61] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
I0114 03:07:25.950696 9157 system_pods.go:61] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
I0114 03:07:25.950700 9157 system_pods.go:74] duration metric: took 177.722556ms to wait for pod list to return data ...
I0114 03:07:25.950706 9157 default_sa.go:34] waiting for default service account to be created ...
I0114 03:07:26.149504 9157 default_sa.go:45] found service account: "default"
I0114 03:07:26.149520 9157 default_sa.go:55] duration metric: took 198.806394ms for default service account to be created ...
I0114 03:07:26.149525 9157 system_pods.go:116] waiting for k8s-apps to be running ...
I0114 03:07:26.350967 9157 system_pods.go:86] 7 kube-system pods found
I0114 03:07:26.350980 9157 system_pods.go:89] "coredns-565d847f94-wk8g2" [eff0eea5-423e-4f30-9cc7-f0a187ccfbe4] Running
I0114 03:07:26.350985 9157 system_pods.go:89] "etcd-pause-030526" [79af2b0d-aa88-4651-8d8f-9d70282bb7ea] Running
I0114 03:07:26.350988 9157 system_pods.go:89] "kube-apiserver-pause-030526" [d5dc7ee3-a3d5-44c6-8927-5d7689e23ce6] Running
I0114 03:07:26.350992 9157 system_pods.go:89] "kube-controller-manager-pause-030526" [80a94c8b-938e-4549-97a9-678b02985b4d] Running
I0114 03:07:26.350999 9157 system_pods.go:89] "kube-proxy-9lkcj" [937abbd6-9bb6-4df5-bda8-a01348c80cfa] Running
I0114 03:07:26.351005 9157 system_pods.go:89] "kube-scheduler-pause-030526" [b5e64f69-f421-456a-8e51-0bf0eaf75a8d] Running
I0114 03:07:26.351011 9157 system_pods.go:89] "storage-provisioner" [14a8b558-cad1-44aa-8434-e31a93fcc6e0] Running
I0114 03:07:26.351017 9157 system_pods.go:126] duration metric: took 201.48912ms to wait for k8s-apps to be running ...
I0114 03:07:26.351034 9157 system_svc.go:44] waiting for kubelet service to be running ....
I0114 03:07:26.351110 9157 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0114 03:07:26.360848 9157 system_svc.go:56] duration metric: took 9.811651ms WaitForService to wait for kubelet.
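The WaitForService step reduces to the exit status of the `systemctl is-active --quiet` command run above. A one-function sketch; kubeletActive is a hypothetical name, mirroring the logged command verbatim.

package svc

import "os/exec"

// kubeletActive reports whether the kubelet unit is active: with --quiet,
// systemctl prints nothing and exits 0 only when the queried units are active,
// so Run's error doubles as the answer.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}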
I0114 03:07:26.360864 9157 kubeadm.go:573] duration metric: took 2.991330205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0114 03:07:26.360876 9157 node_conditions.go:102] verifying NodePressure condition ...
I0114 03:07:26.549739 9157 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0114 03:07:26.549755 9157 node_conditions.go:123] node cpu capacity is 2
I0114 03:07:26.549762 9157 node_conditions.go:105] duration metric: took 188.883983ms to run NodePressure ...
I0114 03:07:26.549769 9157 start.go:217] waiting for startup goroutines ...
I0114 03:07:26.550105 9157 ssh_runner.go:195] Run: rm -f paused
I0114 03:07:26.590700 9157 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
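The closing version line compares kubectl's minor version against the cluster's; a skew of 0 means no compatibility warning is printed. A sketch of that comparison; minorSkew is a hypothetical name and assumes well-formed "x.y.z" versions.

package skew

import (
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "x.y.z" style versions, e.g. minorSkew("1.25.2", "1.25.3") == 0.
func minorSkew(kubectl, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(kubectl) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}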
I0114 03:07:26.635229 9157 out.go:177] * Done! kubectl is now configured to use "pause-030526" cluster and "default" namespace by default
I0114 03:07:25.189357 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
I0114 03:07:25.189405 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
I0114 03:07:25.189411 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | 2023/01/14 03:07:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
I0114 03:07:25.981082 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 3
I0114 03:07:25.981090 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:25.981211 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:25.982406 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:25.982467 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:25.982475 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:25.982483 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:25.982489 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:25.982494 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:25.982499 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:25.982508 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:25.982516 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:25.982523 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:25.982530 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:25.982537 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:25.982543 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:25.982572 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:25.982582 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:25.982591 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:25.982598 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:25.982604 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:25.982609 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:25.982620 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:25.982629 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:25.982637 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:25.982644 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:25.982653 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:25.982660 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
I0114 03:07:27.984344 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Attempt 4
I0114 03:07:27.984372 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0114 03:07:27.984500 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | hyperkit pid from json: 9258
I0114 03:07:27.985325 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Searching for aa:b9:cb:46:9b:fa in /var/db/dhcpd_leases ...
I0114 03:07:27.985469 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | Found 23 entries in /var/db/dhcpd_leases!
I0114 03:07:27.985477 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:1a:11:4f:a1:6e:db ID:1,1a:11:4f:a1:6e:db Lease:0x63c3ddfe}
I0114 03:07:27.985494 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:ce:2c:ac:f7:ed:ae ID:1,ce:2c:ac:f7:ed:ae Lease:0x63c3ddd7}
I0114 03:07:27.985505 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:96:da:f:12:7c:f2 ID:1,96:da:f:12:7c:f2 Lease:0x63c3ddc3}
I0114 03:07:27.985515 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:d2:f4:bd:11:dd:76 ID:1,d2:f4:bd:11:dd:76 Lease:0x63c28c42}
I0114 03:07:27.985521 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:2e:e5:4f:f:5e:6 ID:1,2e:e5:4f:f:5e:6 Lease:0x63c28bc1}
I0114 03:07:27.985527 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:8e:7b:14:29:f7:c6 ID:1,8e:7b:14:29:f7:c6 Lease:0x63c28bb7}
I0114 03:07:27.985532 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:6:5b:1:d4:18:92 ID:1,6:5b:1:d4:18:92 Lease:0x63c28b74}
I0114 03:07:27.985537 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:2a:24:5:87:10:20 ID:1,2a:24:5:87:10:20 Lease:0x63c28a0a}
I0114 03:07:27.985542 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:96:a:1:d1:48:53 ID:1,96:a:1:d1:48:53 Lease:0x63c289a5}
I0114 03:07:27.985551 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:ee:43:14:db:7e:45 ID:1,ee:43:14:db:7e:45 Lease:0x63c3da55}
I0114 03:07:27.985560 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:6e:83:60:c7:cb:4 ID:1,6e:83:60:c7:cb:4 Lease:0x63c288c6}
I0114 03:07:27.985565 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:22:9a:fb:92:46:f1 ID:1,22:9a:fb:92:46:f1 Lease:0x63c28653}
I0114 03:07:27.985586 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:de:18:9:d5:68:d6 ID:1,de:18:9:d5:68:d6 Lease:0x63c288cb}
I0114 03:07:27.985618 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:5e:6f:5:10:ab:29 ID:1,5e:6f:5:10:ab:29 Lease:0x63c288c9}
I0114 03:07:27.985627 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:26:65:35:f5:e7:2 ID:1,26:65:35:f5:e7:2 Lease:0x63c2820f}
I0114 03:07:27.985635 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:d6:bb:2b:34:78:1 ID:1,d6:bb:2b:34:78:1 Lease:0x63c281f9}
I0114 03:07:27.985655 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:32:c3:21:b7:19:cc ID:1,32:c3:21:b7:19:cc Lease:0x63c281d4}
I0114 03:07:27.985691 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:ce:df:1a:3e:3:8a ID:1,ce:df:1a:3e:3:8a Lease:0x63c3d309}
I0114 03:07:27.985704 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:3e:d3:de:c4:f7:eb ID:1,3e:d3:de:c4:f7:eb Lease:0x63c3d2c8}
I0114 03:07:27.985728 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:1a:28:1:9c:82:12 ID:1,1a:28:1:9c:82:12 Lease:0x63c3d214}
I0114 03:07:27.985735 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:92:ab:6d:d7:aa:1e ID:1,92:ab:6d:d7:aa:1e Lease:0x63c3d114}
I0114 03:07:27.985741 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:5a:4f:b9:38:5f:fe ID:1,5a:4f:b9:38:5f:fe Lease:0x63c27f89}
I0114 03:07:27.985746 9247 main.go:134] libmachine: (NoKubernetes-030718) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:ba:ae:dd:d2:6:79 ID:1,ba:ae:dd:d2:6:79 Lease:0x63c27f5b}
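Each "Attempt N" block above is the hyperkit driver polling the macOS vmnet lease file for the freshly booted VM's MAC address (aa:b9:cb:46:9b:fa), which has not yet appeared among the 23 existing entries. A minimal Go sketch of that lookup, assuming the /var/db/dhcpd_leases field layout implied by the entries above; this is an illustration, not minikube's actual driver code:

    // findIPByMAC scans the macOS vmnet DHCP lease file for a MAC address.
    // Field prefixes (ip_address=, hw_address=1,) are assumptions based on
    // the lease entries logged above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func findIPByMAC(leasePath, mac string) (string, error) {
        f, err := os.Open(leasePath)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=1,"):
                if strings.TrimPrefix(line, "hw_address=1,") == mac {
                    return ip, nil // each entry lists ip_address before hw_address
                }
            }
        }
        if err := sc.Err(); err != nil {
            return "", err
        }
        return "", fmt.Errorf("%s not yet in %s", mac, leasePath)
    }

    func main() {
        // The driver retries this until the lease shows up (Attempt 3, 4, ...).
        ip, err := findIPByMAC("/var/db/dhcpd_leases", "aa:b9:cb:46:9b:fa")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("VM IP:", ip)
    }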
*
* ==> Docker <==
* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:30 UTC. --
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.331007077Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/00896af5ccd623a628f391767307c1a9d45e32343eddc996b752a9c7139727f6 pid=6084 runtime=io.containerd.runc.v2
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333349197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333448259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333458160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.333734685Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64b687a4b262b3705a237a5e8f1c05480509b41de28c1a76e6d5f8534499eed9 pid=6100 runtime=io.containerd.runc.v2
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348340304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348409628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348419175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:04 pause-030526 dockerd[3914]: time="2023-01-14T11:07:04.348574713Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6bf08f44884c29bce8afaaee8a369ca1553b77a2f3f362f87893bed08be8580e pid=6134 runtime=io.containerd.runc.v2
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627815369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627899169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.627909740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.629711389Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7a4778602ca817386ceb6b83b0cffa2e4273ed22dec5e1bd6af016c2cdbbc152 pid=6375 runtime=io.containerd.runc.v2
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635505843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635574619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635585017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:09 pause-030526 dockerd[3914]: time="2023-01-14T11:07:09.635881814Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3fdcdd87125fc45218e55627224d289bb364f4e26591a574d4711c1e2bf755db pid=6391 runtime=io.containerd.runc.v2
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738126883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738274110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738296575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:24 pause-030526 dockerd[3914]: time="2023-01-14T11:07:24.738419462Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/b511107c0d65ed1187a9182a9b33f82bfbf4fa8cfee81c4ebdc2d2c2fc5ecc42 pid=6710 runtime=io.containerd.runc.v2
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035767688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035869182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.035879278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 11:07:25 pause-030526 dockerd[3914]: time="2023-01-14T11:07:25.036327785Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4747fe303fd10345c6f83fc3afdd096d34c7cd162e74c11660dbc35198c8c91a pid=6755 runtime=io.containerd.runc.v2
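Each "starting signal loop" line above is a containerd runc-v2 shim coming up for one container; the moby path IDs (00896af5ccd6..., 64b687a4b262..., 6bf08f44884c..., 7a4778602ca8..., 3fdcdd87125f..., 4747fe303fd1...) match the container IDs in the status table below. A hedged Go sketch of capturing this journal section by shelling out to journalctl inside the VM; the unit name and flags are assumptions, not necessarily how minikube gathers it:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Journal entries for the docker unit, as in the section header above.
        out, err := exec.Command("journalctl", "-u", "docker", "--no-pager").CombinedOutput()
        if err != nil {
            fmt.Println("journalctl failed:", err)
        }
        fmt.Print(string(out))
    }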
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4747fe303fd10 6e38f40d628db 7 seconds ago Running storage-provisioner 0 b511107c0d65e
3fdcdd87125fc beaaf00edd38a 22 seconds ago Running kube-proxy 3 4e57c85660d83
7a4778602ca81 5185b96f0becf 22 seconds ago Running coredns 2 8919d849501d6
6bf08f44884c2 6d23ec0e8b87e 27 seconds ago Running kube-scheduler 3 687228c21ca63
64b687a4b262b 6039992312758 27 seconds ago Running kube-controller-manager 3 832b08b9a62e2
fa0ae81988fe7 0346dbd74bcb9 27 seconds ago Running kube-apiserver 3 ff4b3ee4f8ae5
00896af5ccd62 a8a176a5d5d69 27 seconds ago Running etcd 3 ecaeb9f764e75
a91b8dbf52b28 beaaf00edd38a 40 seconds ago Created kube-proxy 2 be1781a847e83
4ef492042630b 5185b96f0becf 40 seconds ago Exited coredns 1 5d6ae273017b7
1f0472740d8e5 a8a176a5d5d69 40 seconds ago Exited etcd 2 9307465ae5847
8cfdb196b1427 6039992312758 40 seconds ago Exited kube-controller-manager 2 c7561d6051ce8
ec5b05843edc6 0346dbd74bcb9 40 seconds ago Exited kube-apiserver 2 a1988593cada4
d1df9d20a995d 6d23ec0e8b87e 40 seconds ago Exited kube-scheduler 2 76689e83a5147
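The ATTEMPT column tells the restart story: every control-plane component is Running on attempt 3, while its attempt-2 predecessor from ~40 seconds earlier has Exited, and the attempt-2 kube-proxy only reached Created. A minimal sketch, assuming the standard Docker Engine Go SDK, of listing the same containers programmatically; illustrative, not part of the test:

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types"
        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        // All: true includes the Exited attempt-2 containers shown above.
        containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            fmt.Printf("%.13s  %-8s %s\n", c.ID, c.State, c.Names[0])
        }
    }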
*
* ==> coredns [4ef492042630] <==
* [INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] plugin/health: Going into lameduck mode for 5s
[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
[ERROR] plugin/errors: 2 8922087648600135430.3435341938167049804. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
*
* ==> coredns [7a4778602ca8] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> describe nodes <==
* Name: pause-030526
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-030526
kubernetes.io/os=linux
minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
minikube.k8s.io/name=pause-030526
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_14T03_06_03_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 14 Jan 2023 11:06:01 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-030526
AcquireTime: <unset>
RenewTime: Sat, 14 Jan 2023 11:07:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 14 Jan 2023 11:07:08 +0000 Sat, 14 Jan 2023 11:06:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 14 Jan 2023 11:07:08 +0000 Sat, 14 Jan 2023 11:06:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 14 Jan 2023 11:07:08 +0000 Sat, 14 Jan 2023 11:06:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 14 Jan 2023 11:07:08 +0000 Sat, 14 Jan 2023 11:07:08 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.24
Hostname: pause-030526
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 5158a2f1d68b4728bdca3e981e3d16f1
System UUID: 59a511ed-0000-0000-93df-149d997cd0f1
Boot ID: 7071b7f0-575a-4ffd-bad0-919bd7ad3180
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system                 coredns-565d847f94-wk8g2                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
kube-system                 etcd-pause-030526                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
kube-system                 kube-apiserver-pause-030526             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 kube-controller-manager-pause-030526    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 kube-proxy-9lkcj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
kube-system                 kube-scheduler-pause-030526             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
kube-system                 storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu                750m (37%)   0 (0%)
memory             170Mi (8%)   170Mi (8%)
ephemeral-storage  0 (0%)       0 (0%)
hugepages-2Mi      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 75s kube-proxy
Normal Starting 21s kube-proxy
Normal Starting 54s kube-proxy
Normal NodeAllocatableEnforced 103s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 102s (x7 over 103s) kubelet Node pause-030526 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 102s (x6 over 103s) kubelet Node pause-030526 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 102s (x6 over 103s) kubelet Node pause-030526 status is now: NodeHasSufficientPID
Normal NodeHasSufficientPID 89s kubelet Node pause-030526 status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 89s kubelet Node pause-030526 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 89s kubelet Node pause-030526 status is now: NodeHasNoDiskPressure
Normal NodeReady 89s kubelet Node pause-030526 status is now: NodeReady
Normal NodeAllocatableEnforced 89s kubelet Updated Node Allocatable limit across pods
Normal Starting 89s kubelet Starting kubelet.
Normal RegisteredNode 78s node-controller Node pause-030526 event: Registered Node pause-030526 in Controller
Normal Starting 28s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 28s (x8 over 28s) kubelet Node pause-030526 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 28s (x8 over 28s) kubelet Node pause-030526 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 28s (x7 over 28s) kubelet Node pause-030526 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 28s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 11s node-controller Node pause-030526 event: Registered Node pause-030526 in Controller
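The event table shows three kube-proxy "Starting" entries (75s, 54s, 21s old) and two kubelet restarts (89s and 28s old) with matching "RegisteredNode" entries: the control plane was rebuilt during the second start rather than left untouched, which is what the test asserts against. A hedged client-go sketch of pulling these node events for a post-mortem; the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        evs, err := cs.CoreV1().Events("").List(context.Background(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=pause-030526", // the node under test
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%-7s %-25s %s\n", e.Type, e.Reason, e.Message)
        }
    }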
*
* ==> dmesg <==
* [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.896084] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.842858] systemd-fstab-generator[530]: Ignoring "noauto" for root device
[ +0.089665] systemd-fstab-generator[541]: Ignoring "noauto" for root device
[ +5.167104] systemd-fstab-generator[762]: Ignoring "noauto" for root device
[ +1.234233] kauditd_printk_skb: 16 callbacks suppressed
[ +0.224985] systemd-fstab-generator[921]: Ignoring "noauto" for root device
[ +0.092006] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.090717] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +1.460171] systemd-fstab-generator[1093]: Ignoring "noauto" for root device
[ +0.081044] systemd-fstab-generator[1104]: Ignoring "noauto" for root device
[ +2.991024] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
[ +0.466189] kauditd_printk_skb: 68 callbacks suppressed
[Jan14 11:06] systemd-fstab-generator[2009]: Ignoring "noauto" for root device
[ +12.288147] kauditd_printk_skb: 8 callbacks suppressed
[ +11.014225] kauditd_printk_skb: 18 callbacks suppressed
[ +4.097840] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
[ +0.157534] systemd-fstab-generator[3048]: Ignoring "noauto" for root device
[ +0.143509] systemd-fstab-generator[3059]: Ignoring "noauto" for root device
[ +17.238898] systemd-fstab-generator[4389]: Ignoring "noauto" for root device
[ +0.099215] systemd-fstab-generator[4443]: Ignoring "noauto" for root device
[Jan14 11:07] kauditd_printk_skb: 31 callbacks suppressed
[ +1.304783] systemd-fstab-generator[5886]: Ignoring "noauto" for root device
*
* ==> etcd [00896af5ccd6] <==
* {"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"db97d05830b4a428","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-14T11:07:05.150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 switched to configuration voters=(15823344892982371368)"}
{"level":"info","ts":"2023-01-14T11:07:05.150Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","added-peer-id":"db97d05830b4a428","added-peer-peer-urls":["https://192.168.64.24:2380"]}
{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f9c405dda3109066","local-member-id":"db97d05830b4a428","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T11:07:05.151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-14T11:07:05.157Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"db97d05830b4a428","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"db97d05830b4a428","initial-advertise-peer-urls":["https://192.168.64.24:2380"],"listen-peer-urls":["https://192.168.64.24:2380"],"advertise-client-urls":["https://192.168.64.24:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.24:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-14T11:07:05.169Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-14T11:07:05.158Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.24:2380"}
{"level":"info","ts":"2023-01-14T11:07:05.170Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.24:2380"}
{"level":"info","ts":"2023-01-14T11:07:06.113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 is starting a new election at term 3"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became pre-candidate at term 3"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgPreVoteResp from db97d05830b4a428 at term 3"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became candidate at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 received MsgVoteResp from db97d05830b4a428 at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"db97d05830b4a428 became leader at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: db97d05830b4a428 elected leader db97d05830b4a428 at term 4"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"db97d05830b4a428","local-member-attributes":"{Name:pause-030526 ClientURLs:[https://192.168.64.24:2379]}","request-path":"/0/members/db97d05830b4a428/attributes","cluster-id":"f9c405dda3109066","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-14T11:07:06.114Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-14T11:07:06.115Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.24:2379"}
{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-14T11:07:06.116Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
*
* ==> etcd [1f0472740d8e] <==
*
*
* ==> kernel <==
* 11:07:31 up 2 min, 0 users, load average: 0.60, 0.29, 0.11
Linux pause-030526 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [ec5b05843edc] <==
*
*
* ==> kube-apiserver [fa0ae81988fe] <==
* I0114 11:07:07.835235 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0114 11:07:07.835321 1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I0114 11:07:07.835731 1 autoregister_controller.go:141] Starting autoregister controller
I0114 11:07:07.835826 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0114 11:07:07.856121 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0114 11:07:07.856531 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0114 11:07:07.858292 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0114 11:07:07.858319 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0114 11:07:07.958455 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0114 11:07:08.030449 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0114 11:07:08.031216 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0114 11:07:08.032075 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0114 11:07:08.032931 1 apf_controller.go:305] Running API Priority and Fairness config worker
I0114 11:07:08.035463 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0114 11:07:08.035912 1 cache.go:39] Caches are synced for autoregister controller
I0114 11:07:08.037457 1 shared_informer.go:262] Caches are synced for node_authorizer
I0114 11:07:08.631959 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0114 11:07:08.834884 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0114 11:07:09.433088 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0114 11:07:09.439315 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0114 11:07:09.467659 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0114 11:07:09.481246 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0114 11:07:09.492033 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0114 11:07:20.415432 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0114 11:07:20.584645 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [64b687a4b262] <==
* I0114 11:07:20.459506 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0114 11:07:20.462354 1 shared_informer.go:262] Caches are synced for expand
I0114 11:07:20.462369 1 shared_informer.go:262] Caches are synced for namespace
I0114 11:07:20.462460 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0114 11:07:20.465035 1 shared_informer.go:262] Caches are synced for ReplicationController
I0114 11:07:20.469843 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I0114 11:07:20.469889 1 shared_informer.go:262] Caches are synced for TTL
I0114 11:07:20.472294 1 shared_informer.go:262] Caches are synced for taint
I0114 11:07:20.472382 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0114 11:07:20.472447 1 taint_manager.go:209] "Sending events to api server"
I0114 11:07:20.472424 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0114 11:07:20.472799 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-030526. Assuming now as a timestamp.
I0114 11:07:20.472929 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0114 11:07:20.473272 1 event.go:294] "Event occurred" object="pause-030526" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-030526 event: Registered Node pause-030526 in Controller"
I0114 11:07:20.480533 1 shared_informer.go:262] Caches are synced for daemon sets
I0114 11:07:20.490072 1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
I0114 11:07:20.496927 1 shared_informer.go:262] Caches are synced for HPA
I0114 11:07:20.574770 1 shared_informer.go:262] Caches are synced for endpoint
I0114 11:07:20.589017 1 shared_informer.go:262] Caches are synced for disruption
I0114 11:07:20.591967 1 shared_informer.go:262] Caches are synced for resource quota
I0114 11:07:20.598516 1 shared_informer.go:262] Caches are synced for stateful set
I0114 11:07:20.621015 1 shared_informer.go:262] Caches are synced for resource quota
I0114 11:07:21.005895 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 11:07:21.066929 1 shared_informer.go:262] Caches are synced for garbage collector
I0114 11:07:21.067007 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-controller-manager [8cfdb196b142] <==
*
*
* ==> kube-proxy [3fdcdd87125f] <==
* I0114 11:07:09.764942 1 node.go:163] Successfully retrieved node IP: 192.168.64.24
I0114 11:07:09.765007 1 server_others.go:138] "Detected node IP" address="192.168.64.24"
I0114 11:07:09.765022 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0114 11:07:09.789595 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0114 11:07:09.789674 1 server_others.go:206] "Using iptables Proxier"
I0114 11:07:09.789705 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0114 11:07:09.789866 1 server.go:661] "Version info" version="v1.25.3"
I0114 11:07:09.789895 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 11:07:09.791171 1 config.go:317] "Starting service config controller"
I0114 11:07:09.791204 1 shared_informer.go:255] Waiting for caches to sync for service config
I0114 11:07:09.791233 1 config.go:226] "Starting endpoint slice config controller"
I0114 11:07:09.791257 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0114 11:07:09.792096 1 config.go:444] "Starting node config controller"
I0114 11:07:09.792122 1 shared_informer.go:255] Waiting for caches to sync for node config
I0114 11:07:09.892056 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0114 11:07:09.892223 1 shared_informer.go:262] Caches are synced for node config
I0114 11:07:09.892063 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [a91b8dbf52b2] <==
*
*
* ==> kube-scheduler [6bf08f44884c] <==
* I0114 11:07:05.785495 1 serving.go:348] Generated self-signed cert in-memory
W0114 11:07:07.930966 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0114 11:07:07.931087 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0114 11:07:07.931149 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 11:07:07.931342 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 11:07:07.946916 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 11:07:07.946999 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 11:07:07.947953 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 11:07:07.948063 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 11:07:07.949707 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 11:07:07.948083 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0114 11:07:07.964273 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0114 11:07:07.964466 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0114 11:07:07.964647 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0114 11:07:07.964697 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0114 11:07:07.964834 1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0114 11:07:07.964957 1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0114 11:07:08.050308 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
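The "forbidden" list failures at 11:07:07.96 are transient: the scheduler starts listing before the apiserver has finished re-establishing RBAC after the restart, the reflector retries, and the cache-synced line at 11:07:08.05 shows recovery within roughly 90ms. A generic sketch of the same poll-until-ready pattern using apimachinery's wait helpers; illustrative only, since the scheduler's real retry lives inside the reflector:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        attempts := 0
        err := wait.PollImmediate(100*time.Millisecond, 5*time.Second, func() (bool, error) {
            attempts++
            // Stand-in condition for "RBAC visible / informer caches synced".
            return attempts >= 3, nil
        })
        fmt.Println("synced:", err == nil, "after", attempts, "attempts")
    }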
*
* ==> kube-scheduler [d1df9d20a995] <==
* I0114 11:06:52.791744 1 serving.go:348] Generated self-signed cert in-memory
W0114 11:06:53.280219 1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.64.24:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.64.24:8443: connect: connection refused
W0114 11:06:53.280234 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0114 11:06:53.280239 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0114 11:06:53.282436 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0114 11:06:53.282467 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0114 11:06:53.284472 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0114 11:06:53.284543 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0114 11:06:53.284551 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 11:06:53.284723 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0114 11:06:53.284834 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0114 11:06:53.284988 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0114 11:06:53.285446 1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0114 11:06:53.285486 1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0114 11:06:53.285807 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Journal begins at Sat 2023-01-14 11:05:33 UTC, ends at Sat 2023-01-14 11:07:32 UTC. --
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.304093 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.404883 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.505492 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.606226 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.706862 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: E0114 11:07:07.807067 5892 kubelet.go:2448] "Error getting node" err="node \"pause-030526\" not found"
Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.908125 5892 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 14 11:07:07 pause-030526 kubelet[5892]: I0114 11:07:07.909327 5892 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055387 5892 kubelet_node_status.go:108] "Node was previously registered" node="pause-030526"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.055555 5892 kubelet_node_status.go:73] "Successfully registered node" node="pause-030526"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.675696 5892 apiserver.go:52] "Watching apiserver"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677949 5892 topology_manager.go:205] "Topology Admit Handler"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.677999 5892 topology_manager.go:205] "Topology Admit Handler"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821148 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72x7q\" (UniqueName: \"kubernetes.io/projected/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-kube-api-access-72x7q\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821518 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-proxy\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821683 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eff0eea5-423e-4f30-9cc7-f0a187ccfbe4-config-volume\") pod \"coredns-565d847f94-wk8g2\" (UID: \"eff0eea5-423e-4f30-9cc7-f0a187ccfbe4\") " pod="kube-system/coredns-565d847f94-wk8g2"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821740 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-xtables-lock\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821852 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/937abbd6-9bb6-4df5-bda8-a01348c80cfa-lib-modules\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.821958 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt7j\" (UniqueName: \"kubernetes.io/projected/937abbd6-9bb6-4df5-bda8-a01348c80cfa-kube-api-access-zmt7j\") pod \"kube-proxy-9lkcj\" (UID: \"937abbd6-9bb6-4df5-bda8-a01348c80cfa\") " pod="kube-system/kube-proxy-9lkcj"
Jan 14 11:07:08 pause-030526 kubelet[5892]: I0114 11:07:08.822000 5892 reconciler.go:169] "Reconciler: start to sync state"
Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.578961 5892 scope.go:115] "RemoveContainer" containerID="a91b8dbf52b2899bfa63a86f3b29f268678711d37ac71fba7ef99acfabef6696"
Jan 14 11:07:09 pause-030526 kubelet[5892]: I0114 11:07:09.579108 5892 scope.go:115] "RemoveContainer" containerID="4ef492042630b948c5a7cf8834310194a4c1a14d0407a74904076077074843a0"
Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.345531 5892 topology_manager.go:205] "Topology Admit Handler"
Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451536 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/14a8b558-cad1-44aa-8434-e31a93fcc6e0-tmp\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
Jan 14 11:07:24 pause-030526 kubelet[5892]: I0114 11:07:24.451687 5892 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxk6\" (UniqueName: \"kubernetes.io/projected/14a8b558-cad1-44aa-8434-e31a93fcc6e0-kube-api-access-rjxk6\") pod \"storage-provisioner\" (UID: \"14a8b558-cad1-44aa-8434-e31a93fcc6e0\") " pod="kube-system/storage-provisioner"
*
* ==> storage-provisioner [4747fe303fd1] <==
* I0114 11:07:25.092946 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0114 11:07:25.101139 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0114 11:07:25.101183 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0114 11:07:25.105549 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0114 11:07:25.106105 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
I0114 11:07:25.107230 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"578576fd-279f-4e3d-946a-2f8e3400fd7a", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9 became leader
I0114 11:07:25.207006 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-030526_0329eba3-6dd1-4234-8e96-6a02360c4ff9!
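The lease lines above are client-go leader election: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lock (an Endpoints lock here) before starting its controller. A hedged sketch of the same pattern using a Lease lock instead; the identity and timings are illustrative, not the provisioner's actual configuration:

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // illustrative
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting controller") },
                OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
            },
        })
    }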
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-030526 -n pause-030526
helpers_test.go:261: (dbg) Run: kubectl --context pause-030526 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-030526 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-030526 describe pod : exit status 1 (39.488441ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-030526 describe pod : exit status 1
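The exit-1 from "describe pod" is benign: the field-selector query at helpers_test.go:261 found no non-running pods, so describe was invoked with an empty name list. A sketch (not the helpers_test.go code) of guarding that post-mortem step; the kubectl flags are the ones shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "pause-030526", "get", "po",
            "-o=jsonpath={.items[*].metadata.name}", "-A",
            "--field-selector=status.phase!=Running").Output()
        if err != nil {
            fmt.Println("get po failed:", err)
            return
        }
        names := strings.Fields(string(out))
        if len(names) == 0 {
            fmt.Println("no non-running pods; skipping describe")
            return
        }
        args := append([]string{"--context", "pause-030526", "describe", "pod"}, names...)
        desc, _ := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(desc))
    }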
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.51s)