=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-darwin-amd64 start -p pause-132406 --alsologtostderr -v=1 --driver=hyperkit
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-132406 --alsologtostderr -v=1 --driver=hyperkit : (48.437582205s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-132406] minikube v1.28.0 on Darwin 13.0.1
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
* Using the hyperkit driver based on existing profile
* Starting control plane node pause-132406 in cluster pause-132406
* Updating the running hyperkit "pause-132406" VM ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0108 13:24:59.144440 11017 out.go:296] Setting OutFile to fd 1 ...
I0108 13:24:59.144700 11017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 13:24:59.144706 11017 out.go:309] Setting ErrFile to fd 2...
I0108 13:24:59.144710 11017 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 13:24:59.144819 11017 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
I0108 13:24:59.145302 11017 out.go:303] Setting JSON to false
I0108 13:24:59.165166 11017 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5073,"bootTime":1673208026,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0108 13:24:59.165262 11017 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0108 13:24:59.187636 11017 out.go:177] * [pause-132406] minikube v1.28.0 on Darwin 13.0.1
I0108 13:24:59.229531 11017 notify.go:220] Checking for updates...
I0108 13:24:59.250258 11017 out.go:177] - MINIKUBE_LOCATION=15565
I0108 13:24:59.271341 11017 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
I0108 13:24:59.292311 11017 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 13:24:59.313208 11017 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 13:24:59.334331 11017 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
I0108 13:24:59.355668 11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:24:59.356032 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:24:59.356077 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:24:59.363072 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52862
I0108 13:24:59.363470 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:24:59.363894 11017 main.go:134] libmachine: Using API Version 1
I0108 13:24:59.363904 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:24:59.364148 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:24:59.364248 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:24:59.364376 11017 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 13:24:59.364666 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:24:59.364695 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:24:59.371900 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52864
I0108 13:24:59.372300 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:24:59.372643 11017 main.go:134] libmachine: Using API Version 1
I0108 13:24:59.372655 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:24:59.372850 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:24:59.372953 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:24:59.400218 11017 out.go:177] * Using the hyperkit driver based on existing profile
I0108 13:24:59.421285 11017 start.go:294] selected driver: hyperkit
I0108 13:24:59.421306 11017 start.go:838] validating driver "hyperkit" against &{Name:pause-132406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 13:24:59.421422 11017 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 13:24:59.421481 11017 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 13:24:59.421599 11017 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0108 13:24:59.428642 11017 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0108 13:24:59.432002 11017 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:24:59.432023 11017 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0108 13:24:59.434365 11017 cni.go:95] Creating CNI manager for ""
I0108 13:24:59.434384 11017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 13:24:59.434399 11017 start_flags.go:317] config:
{Name:pause-132406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 13:24:59.434556 11017 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 13:24:59.476204 11017 out.go:177] * Starting control plane node pause-132406 in cluster pause-132406
I0108 13:24:59.497317 11017 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 13:24:59.497392 11017 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0108 13:24:59.497432 11017 cache.go:57] Caching tarball of preloaded images
I0108 13:24:59.497617 11017 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0108 13:24:59.497637 11017 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0108 13:24:59.497768 11017 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/config.json ...
I0108 13:24:59.498436 11017 cache.go:193] Successfully downloaded all kic artifacts
I0108 13:24:59.498484 11017 start.go:364] acquiring machines lock for pause-132406: {Name:mk29e5f49e96ee5817a491da62b8738aae3fb506 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0108 13:24:59.498571 11017 start.go:368] acquired machines lock for "pause-132406" in 69.225µs
I0108 13:24:59.498618 11017 start.go:96] Skipping create...Using existing machine configuration
I0108 13:24:59.498629 11017 fix.go:55] fixHost starting:
I0108 13:24:59.499114 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:24:59.499149 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:24:59.506673 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52866
I0108 13:24:59.507029 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:24:59.507358 11017 main.go:134] libmachine: Using API Version 1
I0108 13:24:59.507369 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:24:59.507578 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:24:59.507683 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:24:59.507764 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:24:59.507847 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:24:59.507930 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:24:59.508870 11017 fix.go:103] recreateIfNeeded on pause-132406: state=Running err=<nil>
W0108 13:24:59.508885 11017 fix.go:129] unexpected machine state, will restart: <nil>
I0108 13:24:59.553275 11017 out.go:177] * Updating the running hyperkit "pause-132406" VM ...
I0108 13:24:59.574311 11017 machine.go:88] provisioning docker machine ...
I0108 13:24:59.574343 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:24:59.574546 11017 main.go:134] libmachine: (pause-132406) Calling .GetMachineName
I0108 13:24:59.574663 11017 buildroot.go:166] provisioning hostname "pause-132406"
I0108 13:24:59.574678 11017 main.go:134] libmachine: (pause-132406) Calling .GetMachineName
I0108 13:24:59.574785 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:24:59.574903 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:24:59.575002 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.575111 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.575214 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:24:59.575380 11017 main.go:134] libmachine: Using SSH client type: native
I0108 13:24:59.575607 11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.27 22 <nil> <nil>}
I0108 13:24:59.575620 11017 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-132406 && echo "pause-132406" | sudo tee /etc/hostname
I0108 13:24:59.660545 11017 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-132406
I0108 13:24:59.660565 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:24:59.660757 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:24:59.660900 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.661028 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.661184 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:24:59.661369 11017 main.go:134] libmachine: Using SSH client type: native
I0108 13:24:59.661538 11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.27 22 <nil> <nil>}
I0108 13:24:59.661551 11017 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-132406' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-132406/g' /etc/hosts;
else
echo '127.0.1.1 pause-132406' | sudo tee -a /etc/hosts;
fi
fi
I0108 13:24:59.731442 11017 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0108 13:24:59.731463 11017 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-3013/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-3013/.minikube}
I0108 13:24:59.731476 11017 buildroot.go:174] setting up certificates
I0108 13:24:59.731493 11017 provision.go:83] configureAuth start
I0108 13:24:59.731509 11017 main.go:134] libmachine: (pause-132406) Calling .GetMachineName
I0108 13:24:59.731656 11017 main.go:134] libmachine: (pause-132406) Calling .GetIP
I0108 13:24:59.731755 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:24:59.731862 11017 provision.go:138] copyHostCerts
I0108 13:24:59.731961 11017 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.pem, removing ...
I0108 13:24:59.731972 11017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.pem
I0108 13:24:59.732122 11017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.pem (1082 bytes)
I0108 13:24:59.732354 11017 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3013/.minikube/cert.pem, removing ...
I0108 13:24:59.732367 11017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3013/.minikube/cert.pem
I0108 13:24:59.732482 11017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-3013/.minikube/cert.pem (1123 bytes)
I0108 13:24:59.732719 11017 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-3013/.minikube/key.pem, removing ...
I0108 13:24:59.732728 11017 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-3013/.minikube/key.pem
I0108 13:24:59.732794 11017 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-3013/.minikube/key.pem (1675 bytes)
I0108 13:24:59.732941 11017 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca-key.pem org=jenkins.pause-132406 san=[192.168.64.27 192.168.64.27 localhost 127.0.0.1 minikube pause-132406]
I0108 13:24:59.847140 11017 provision.go:172] copyRemoteCerts
I0108 13:24:59.847204 11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0108 13:24:59.847222 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:24:59.847387 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:24:59.847466 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.847559 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:24:59.847641 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:24:59.888266 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0108 13:24:59.905918 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I0108 13:24:59.923973 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0108 13:24:59.942131 11017 provision.go:86] duration metric: configureAuth took 210.621055ms
I0108 13:24:59.942145 11017 buildroot.go:189] setting minikube options for container-runtime
I0108 13:24:59.942323 11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:24:59.942338 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:24:59.942477 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:24:59.942575 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:24:59.942665 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.942753 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:24:59.942866 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:24:59.943017 11017 main.go:134] libmachine: Using SSH client type: native
I0108 13:24:59.943143 11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.27 22 <nil> <nil>}
I0108 13:24:59.943153 11017 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0108 13:25:00.012844 11017 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0108 13:25:00.012862 11017 buildroot.go:70] root file system type: tmpfs
I0108 13:25:00.013031 11017 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0108 13:25:00.013062 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:00.013209 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:00.013317 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.013432 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.013556 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:00.013811 11017 main.go:134] libmachine: Using SSH client type: native
I0108 13:25:00.013985 11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.27 22 <nil> <nil>}
I0108 13:25:00.014047 11017 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0108 13:25:00.093902 11017 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0108 13:25:00.093932 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:00.094100 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:00.094195 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.094319 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.094441 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:00.094615 11017 main.go:134] libmachine: Using SSH client type: native
I0108 13:25:00.094739 11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.27 22 <nil> <nil>}
I0108 13:25:00.094752 11017 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0108 13:25:00.167191 11017 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0108 13:25:00.167204 11017 machine.go:91] provisioned docker machine in 592.87714ms
I0108 13:25:00.167215 11017 start.go:300] post-start starting for "pause-132406" (driver="hyperkit")
I0108 13:25:00.167223 11017 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0108 13:25:00.167235 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:00.167462 11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0108 13:25:00.167476 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:00.167587 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:00.167685 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.167791 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:00.167893 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:00.210430 11017 ssh_runner.go:195] Run: cat /etc/os-release
I0108 13:25:00.213362 11017 info.go:137] Remote host: Buildroot 2021.02.12
I0108 13:25:00.213376 11017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3013/.minikube/addons for local assets ...
I0108 13:25:00.213473 11017 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-3013/.minikube/files for local assets ...
I0108 13:25:00.213637 11017 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem -> 42012.pem in /etc/ssl/certs
I0108 13:25:00.213816 11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0108 13:25:00.219994 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem --> /etc/ssl/certs/42012.pem (1708 bytes)
I0108 13:25:00.238777 11017 start.go:303] post-start completed in 71.550985ms
I0108 13:25:00.238794 11017 fix.go:57] fixHost completed within 740.163594ms
I0108 13:25:00.238809 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:00.238949 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:00.239043 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.239154 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.239244 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:00.239373 11017 main.go:134] libmachine: Using SSH client type: native
I0108 13:25:00.239484 11017 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec4a0] 0x13ef620 <nil> [] 0s} 192.168.64.27 22 <nil> <nil>}
I0108 13:25:00.239492 11017 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I0108 13:25:00.308816 11017 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673213100.379604899
I0108 13:25:00.308836 11017 fix.go:207] guest clock: 1673213100.379604899
I0108 13:25:00.308845 11017 fix.go:220] Guest: 2023-01-08 13:25:00.379604899 -0800 PST Remote: 2023-01-08 13:25:00.238797 -0800 PST m=+1.144074135 (delta=140.807899ms)
I0108 13:25:00.308870 11017 fix.go:191] guest clock delta is within tolerance: 140.807899ms
I0108 13:25:00.308874 11017 start.go:83] releasing machines lock for "pause-132406", held for 810.289498ms
I0108 13:25:00.308891 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:00.309018 11017 main.go:134] libmachine: (pause-132406) Calling .GetIP
I0108 13:25:00.309095 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:00.309441 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:00.309566 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:00.309668 11017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0108 13:25:00.309712 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:00.309763 11017 ssh_runner.go:195] Run: cat /version.json
I0108 13:25:00.309785 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:00.309831 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:00.309903 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:00.309978 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.310053 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:00.310076 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:00.310168 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:00.310203 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:00.310364 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:00.348771 11017 ssh_runner.go:195] Run: systemctl --version
I0108 13:25:00.388455 11017 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 13:25:00.388568 11017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 13:25:00.405608 11017 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0108 13:25:00.405629 11017 docker.go:543] Images already preloaded, skipping extraction
I0108 13:25:00.405741 11017 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0108 13:25:00.416067 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0108 13:25:00.427519 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0108 13:25:00.437606 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0108 13:25:00.455265 11017 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0108 13:25:00.602128 11017 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0108 13:25:00.735308 11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 13:25:00.866261 11017 ssh_runner.go:195] Run: sudo systemctl restart docker
I0108 13:25:17.774785 11017 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.90843721s)
I0108 13:25:17.774853 11017 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0108 13:25:17.875458 11017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0108 13:25:17.976112 11017 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0108 13:25:17.984898 11017 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0108 13:25:17.984974 11017 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0108 13:25:17.988660 11017 start.go:472] Will wait 60s for crictl version
I0108 13:25:17.988708 11017 ssh_runner.go:195] Run: sudo crictl version
I0108 13:25:18.011758 11017 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.21
RuntimeApiVersion: 1.41.0
I0108 13:25:18.011842 11017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 13:25:18.031040 11017 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0108 13:25:18.095550 11017 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.21 ...
I0108 13:25:18.095681 11017 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0108 13:25:18.098401 11017 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 13:25:18.098471 11017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 13:25:18.114724 11017 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0108 13:25:18.114737 11017 docker.go:543] Images already preloaded, skipping extraction
I0108 13:25:18.114829 11017 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0108 13:25:18.130914 11017 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0108 13:25:18.130933 11017 cache_images.go:84] Images are preloaded, skipping loading
I0108 13:25:18.131029 11017 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0108 13:25:18.151682 11017 cni.go:95] Creating CNI manager for ""
I0108 13:25:18.151699 11017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 13:25:18.151718 11017 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0108 13:25:18.151734 11017 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.27 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-132406 NodeName:pause-132406 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.27"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.64.27 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
I0108 13:25:18.151826 11017 kubeadm.go:163] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.27
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-132406"
kubeletExtraArgs:
node-ip: 192.168.64.27
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.27"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0108 13:25:18.151903 11017 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-132406 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.27 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0108 13:25:18.151975 11017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I0108 13:25:18.157772 11017 binaries.go:44] Found k8s binaries, skipping transfer
I0108 13:25:18.157827 11017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0108 13:25:18.163442 11017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I0108 13:25:18.174575 11017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0108 13:25:18.185567 11017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2037 bytes)
I0108 13:25:18.196553 11017 ssh_runner.go:195] Run: grep 192.168.64.27 control-plane.minikube.internal$ /etc/hosts
I0108 13:25:18.198959 11017 certs.go:54] Setting up /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406 for IP: 192.168.64.27
I0108 13:25:18.199061 11017 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.key
I0108 13:25:18.199112 11017 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/proxy-client-ca.key
I0108 13:25:18.199198 11017 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key
I0108 13:25:18.199262 11017 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/apiserver.key.e04425f9
I0108 13:25:18.199314 11017 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/proxy-client.key
I0108 13:25:18.199539 11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/4201.pem (1338 bytes)
W0108 13:25:18.199577 11017 certs.go:384] ignoring /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/4201_empty.pem, impossibly tiny 0 bytes
I0108 13:25:18.199589 11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca-key.pem (1679 bytes)
I0108 13:25:18.199629 11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem (1082 bytes)
I0108 13:25:18.199665 11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem (1123 bytes)
I0108 13:25:18.199700 11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/certs/key.pem (1675 bytes)
I0108 13:25:18.199772 11017 certs.go:388] found cert: /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem (1708 bytes)
I0108 13:25:18.200282 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0108 13:25:18.216490 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0108 13:25:18.232771 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0108 13:25:18.248769 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0108 13:25:18.264640 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0108 13:25:18.281114 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0108 13:25:18.297547 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0108 13:25:18.313498 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0108 13:25:18.329567 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/files/etc/ssl/certs/42012.pem --> /usr/share/ca-certificates/42012.pem (1708 bytes)
I0108 13:25:18.346166 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0108 13:25:18.362324 11017 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/4201.pem --> /usr/share/ca-certificates/4201.pem (1338 bytes)
I0108 13:25:18.378266 11017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0108 13:25:18.389409 11017 ssh_runner.go:195] Run: openssl version
I0108 13:25:18.392867 11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0108 13:25:18.399372 11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0108 13:25:18.402423 11017 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 8 20:28 /usr/share/ca-certificates/minikubeCA.pem
I0108 13:25:18.402468 11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0108 13:25:18.405927 11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0108 13:25:18.411551 11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4201.pem && ln -fs /usr/share/ca-certificates/4201.pem /etc/ssl/certs/4201.pem"
I0108 13:25:18.418019 11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4201.pem
I0108 13:25:18.420869 11017 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 8 20:32 /usr/share/ca-certificates/4201.pem
I0108 13:25:18.420912 11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4201.pem
I0108 13:25:18.424373 11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4201.pem /etc/ssl/certs/51391683.0"
I0108 13:25:18.429820 11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42012.pem && ln -fs /usr/share/ca-certificates/42012.pem /etc/ssl/certs/42012.pem"
I0108 13:25:18.436377 11017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42012.pem
I0108 13:25:18.439330 11017 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 8 20:32 /usr/share/ca-certificates/42012.pem
I0108 13:25:18.439378 11017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42012.pem
I0108 13:25:18.442990 11017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/42012.pem /etc/ssl/certs/3ec20f2e.0"
I0108 13:25:18.448565 11017 kubeadm.go:396] StartCluster: {Name:pause-132406 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-132406 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 13:25:18.448668 11017 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 13:25:18.464154 11017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0108 13:25:18.470018 11017 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I0108 13:25:18.470031 11017 kubeadm.go:627] restartCluster start
I0108 13:25:18.470077 11017 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0108 13:25:18.475709 11017 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0108 13:25:18.476155 11017 kubeconfig.go:92] found "pause-132406" server: "https://192.168.64.27:8443"
I0108 13:25:18.476799 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 13:25:18.477292 11017 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0108 13:25:18.482587 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:18.482702 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:18.490037 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:18.691070 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:18.691265 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:18.700943 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:18.891391 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:18.891526 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:18.900676 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:19.090747 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:19.090861 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:19.100097 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:19.290255 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:19.290417 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:19.299741 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:19.491515 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:19.491687 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:19.501195 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:19.690187 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:19.690275 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:19.698750 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:19.890980 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:19.891134 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:19.900481 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:20.092180 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:20.092380 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:20.101621 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:20.292174 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:20.292382 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:20.301493 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:20.490195 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:20.490362 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:20.499831 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:20.690991 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:20.691080 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:20.707843 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:20.890131 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:20.890220 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:20.938154 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:21.090723 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:21.090847 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:21.106274 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:21.290401 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:21.290470 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:21.312786 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:21.490334 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:21.490415 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:21.506185 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:21.506197 11017 api_server.go:165] Checking apiserver status ...
I0108 13:25:21.506257 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0108 13:25:21.527352 11017 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0108 13:25:21.527365 11017 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
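
The seventeen pgrep attempts above are a bounded poll: the same check is retried roughly every 200ms until a deadline expires, and only then does the code conclude the apiserver is gone ("timed out waiting for the condition") and fall through to a reconfigure. A sketch of that retry pattern; the 3s timeout here is illustrative, not minikube's actual budget:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerPID retries the same pgrep the log runs until it
    // succeeds or the deadline passes.
    func waitForAPIServerPID(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                return nil // apiserver process found
            }
            time.Sleep(200 * time.Millisecond) // matches the spacing in the log
        }
        return errors.New("timed out waiting for the condition")
    }

    func main() {
        fmt.Println(waitForAPIServerPID(3 * time.Second))
    }
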
I0108 13:25:21.527375 11017 kubeadm.go:1114] stopping kube-system containers ...
I0108 13:25:21.527449 11017 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0108 13:25:21.586435 11017 docker.go:444] Stopping containers: [b3ea39090c67 a59e122b43f1 82b65485dbb4 d4f72481538e b17d288e92ab b5535145a6cf 80b9970570ee d0c6f1675c8d f879ee821d6d 592c899764e8 9dd26d98b44d 2fe730d9855f 8bc92a9a48c4 ae84ec1a64cd 5b4f6217121e 77e4c35247cc bded6cef9bbf c2ddc4b3adc5 0a13f0225a4c fb47cc5b476c 11b52ad80c15 9572a2db0191 fa94638cc4fa bf33cb6c0c18 40e688e290e3 685db2b6dfc6 2a8b711bcdda]
I0108 13:25:21.586530 11017 ssh_runner.go:195] Run: docker stop b3ea39090c67 a59e122b43f1 82b65485dbb4 d4f72481538e b17d288e92ab b5535145a6cf 80b9970570ee d0c6f1675c8d f879ee821d6d 592c899764e8 9dd26d98b44d 2fe730d9855f 8bc92a9a48c4 ae84ec1a64cd 5b4f6217121e 77e4c35247cc bded6cef9bbf c2ddc4b3adc5 0a13f0225a4c fb47cc5b476c 11b52ad80c15 9572a2db0191 fa94638cc4fa bf33cb6c0c18 40e688e290e3 685db2b6dfc6 2a8b711bcdda
I0108 13:25:22.415742 11017 ssh_runner.go:195] Run: sudo systemctl stop kubelet
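
Teardown before the reconfigure: every container whose name matches the kubelet's k8s_<container>_<pod>_(kube-system)_ naming pattern is listed, all of them are stopped in a single docker invocation, and then kubelet itself is stopped so the static pods are not immediately respawned. A sketch of the same sequence, assuming a local docker daemon (the log runs these commands over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List container IDs whose names match the kube-system pod pattern.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) > 0 {
            // One invocation for all IDs, as in the log.
            exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
        }
        // Stop kubelet afterwards so it does not restart the static pods.
        exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
        fmt.Printf("stopped %d containers\n", len(ids))
    }
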
I0108 13:25:22.469898 11017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0108 13:25:22.477303 11017 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Jan 8 21:24 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jan 8 21:24 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Jan 8 21:24 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Jan 8 21:24 /etc/kubernetes/scheduler.conf
I0108 13:25:22.477369 11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0108 13:25:22.497837 11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0108 13:25:22.510243 11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0108 13:25:22.519649 11017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0108 13:25:22.519713 11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0108 13:25:22.529569 11017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0108 13:25:22.547255 11017 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0108 13:25:22.547314 11017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
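
The grep probes above test each kubeconfig under /etc/kubernetes for the expected control-plane endpoint; exit status 1 means the string is absent, and such a stale file is deleted so the kubeadm phases below can regenerate it. A sketch of that check (in the log it runs as root over SSH; sudo is omitted here):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits 1 when the endpoint is missing; treat that as stale.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
                os.Remove(f) // kubeadm init phase kubeconfig recreates it
            }
        }
    }
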
I0108 13:25:22.556086 11017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0108 13:25:22.562336 11017 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0108 13:25:22.562348 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0108 13:25:22.629479 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0108 13:25:23.328667 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0108 13:25:23.475730 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0108 13:25:23.517443 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
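
Instead of a full kubeadm init, the restart replays individual init phases in a fixed order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence, assuming kubeadm is on PATH (the log pins PATH to /var/lib/minikube/binaries/v1.25.3):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same phase order as the five commands above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if err := exec.Command("kubeadm", args...).Run(); err != nil {
                fmt.Printf("phase %v failed: %v\n", p, err)
                return
            }
        }
    }
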
I0108 13:25:23.562971 11017 api_server.go:51] waiting for apiserver process to appear ...
I0108 13:25:23.563040 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:24.078015 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:24.576987 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:25.077412 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:25.089843 11017 api_server.go:71] duration metric: took 1.526868359s to wait for apiserver process to appear ...
I0108 13:25:25.089860 11017 api_server.go:87] waiting for apiserver healthz status ...
I0108 13:25:25.089870 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:28.649822 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0108 13:25:28.649840 11017 api_server.go:102] status: https://192.168.64.27:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0108 13:25:29.150741 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:29.171836 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0108 13:25:29.171851 11017 api_server.go:102] status: https://192.168.64.27:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0108 13:25:29.651373 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:29.656482 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0108 13:25:29.656499 11017 api_server.go:102] status: https://192.168.64.27:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0108 13:25:30.150465 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:30.154853 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
ok
I0108 13:25:30.160641 11017 api_server.go:140] control plane version: v1.25.3
I0108 13:25:30.160657 11017 api_server.go:130] duration metric: took 5.070772061s to wait for apiserver health ...
I0108 13:25:30.160672 11017 cni.go:95] Creating CNI manager for ""
I0108 13:25:30.160687 11017 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 13:25:30.160698 11017 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 13:25:30.167315 11017 system_pods.go:59] 6 kube-system pods found
I0108 13:25:30.167330 11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:30.167336 11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0108 13:25:30.167341 11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0108 13:25:30.167347 11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0108 13:25:30.167352 11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0108 13:25:30.167357 11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0108 13:25:30.167361 11017 system_pods.go:74] duration metric: took 6.65801ms to wait for pod list to return data ...
I0108 13:25:30.167367 11017 node_conditions.go:102] verifying NodePressure condition ...
I0108 13:25:30.173959 11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0108 13:25:30.173980 11017 node_conditions.go:123] node cpu capacity is 2
I0108 13:25:30.173991 11017 node_conditions.go:105] duration metric: took 6.621227ms to run NodePressure ...
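
Verifying the NodePressure condition amounts to reading each node's reported capacity and confirming that none of the pressure conditions (MemoryPressure, DiskPressure, PIDPressure) is True. A client-go sketch, under the assumption that the in-VM kubeconfig path used throughout the log is readable where this runs:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Println("ephemeral storage:", n.Status.Capacity.StorageEphemeral().String())
            fmt.Println("cpu:", n.Status.Capacity.Cpu().String())
            for _, c := range n.Status.Conditions {
                // Any pressure condition that is True fails the check.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("node under pressure: %s\n", c.Type)
                }
            }
        }
    }
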
I0108 13:25:30.174004 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0108 13:25:30.356109 11017 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I0108 13:25:30.359685 11017 kubeadm.go:778] kubelet initialised
I0108 13:25:30.359697 11017 kubeadm.go:779] duration metric: took 3.573087ms waiting for restarted kubelet to initialise ...
I0108 13:25:30.359703 11017 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:30.363076 11017 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:30.369851 11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:30.369861 11017 pod_ready.go:81] duration metric: took 6.774365ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:30.369869 11017 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:32.380838 11017 pod_ready.go:102] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"False"
I0108 13:25:34.381340 11017 pod_ready.go:102] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"False"
I0108 13:25:36.881913 11017 pod_ready.go:102] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"False"
I0108 13:25:38.882255 11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:38.882271 11017 pod_ready.go:81] duration metric: took 8.51236508s waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:38.882277 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:40.902617 11017 pod_ready.go:102] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"False"
I0108 13:25:43.390748 11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.390761 11017 pod_ready.go:81] duration metric: took 4.508462855s waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.390767 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.400557 11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.400568 11017 pod_ready.go:81] duration metric: took 9.796554ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.400574 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.403156 11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.403166 11017 pod_ready.go:81] duration metric: took 2.587107ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.403174 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.411451 11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.411465 11017 pod_ready.go:81] duration metric: took 1.008282022s waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.411472 11017 pod_ready.go:38] duration metric: took 14.051708866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
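
Each pod_ready wait above polls a named kube-system pod until its PodReady condition reports True, within the 4m0s budget. A client-go sketch of that loop; the pod names and kubeconfig path are taken from the log, and the 2s poll interval is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the PodReady condition is True, the "Ready"
    // status the pod_ready lines poll for.
    func isReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget in the log
        for _, name := range []string{"etcd-pause-132406", "kube-apiserver-pause-132406"} {
            for time.Now().Before(deadline) {
                p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
                if err == nil && isReady(p) {
                    fmt.Println(name, "is Ready")
                    break
                }
                time.Sleep(2 * time.Second)
            }
        }
    }
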
I0108 13:25:44.411481 11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0108 13:25:44.418863 11017 ops.go:34] apiserver oom_adj: -16
I0108 13:25:44.418873 11017 kubeadm.go:631] restartCluster took 25.948745972s
I0108 13:25:44.418878 11017 kubeadm.go:398] StartCluster complete in 25.970227663s
I0108 13:25:44.418886 11017 settings.go:142] acquiring lock: {Name:mk8df047e431900506a7782529ec776808797932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:44.418977 11017 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15565-3013/kubeconfig
I0108 13:25:44.419424 11017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/kubeconfig: {Name:mk12e69a052d3b808fcdcd72ad62f9045d7b154d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:44.419963 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 13:25:44.421604 11017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-132406" rescaled to 1
I0108 13:25:44.421632 11017 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 13:25:44.421642 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0108 13:25:44.421664 11017 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0108 13:25:44.464718 11017 out.go:177] * Verifying Kubernetes components...
I0108 13:25:44.464758 11017 addons.go:65] Setting storage-provisioner=true in profile "pause-132406"
I0108 13:25:44.485523 11017 addons.go:227] Setting addon storage-provisioner=true in "pause-132406"
I0108 13:25:44.464765 11017 addons.go:65] Setting default-storageclass=true in profile "pause-132406"
I0108 13:25:44.421794 11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:25:44.475539 11017 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
W0108 13:25:44.485550 11017 addons.go:236] addon storage-provisioner should already be in state true
I0108 13:25:44.485556 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 13:25:44.485552 11017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-132406"
I0108 13:25:44.485605 11017 host.go:66] Checking if "pause-132406" exists ...
I0108 13:25:44.485877 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.485890 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.485895 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.485909 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.493358 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52925
I0108 13:25:44.493704 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52927
I0108 13:25:44.493766 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.494093 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.494097 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.494108 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.494325 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.494420 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.494437 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.494450 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.494517 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.494607 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.494633 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.495008 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.495031 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.496752 11017 node_ready.go:35] waiting up to 6m0s for node "pause-132406" to be "Ready" ...
I0108 13:25:44.497335 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 13:25:44.498930 11017 node_ready.go:49] node "pause-132406" has status "Ready":"True"
I0108 13:25:44.498941 11017 node_ready.go:38] duration metric: took 2.059705ms waiting for node "pause-132406" to be "Ready" ...
I0108 13:25:44.498947 11017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:44.499578 11017 addons.go:227] Setting addon default-storageclass=true in "pause-132406"
W0108 13:25:44.499589 11017 addons.go:236] addon default-storageclass should already be in state true
I0108 13:25:44.499606 11017 host.go:66] Checking if "pause-132406" exists ...
I0108 13:25:44.499869 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.499888 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.502432 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52929
I0108 13:25:44.502793 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.503162 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.503182 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.503337 11017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.503425 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.503542 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.503638 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.503741 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.505184 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:44.526650 11017 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0108 13:25:44.507300 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52931
I0108 13:25:44.527054 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.547766 11017 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0108 13:25:44.547777 11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0108 13:25:44.547790 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:44.547909 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:44.548050 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.548063 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.548095 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:44.548190 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:44.548274 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.548290 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:44.548649 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.548675 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.555825 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52934
I0108 13:25:44.556201 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.556573 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.556585 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.556785 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.556890 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.556978 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.557074 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.558022 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:44.558187 11017 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0108 13:25:44.558196 11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0108 13:25:44.558205 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:44.558288 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:44.558385 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:44.558470 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:44.558547 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:44.587109 11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.587119 11017 pod_ready.go:81] duration metric: took 83.771886ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.587128 11017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.599174 11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0108 13:25:44.609018 11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
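
Both addon manifests were copied into /etc/kubernetes/addons a few lines earlier and are now applied with the kubectl binary bundled for the cluster's Kubernetes version, pinned to the in-VM kubeconfig. A sketch of those two invocations (run locally here; the log executes them over SSH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            // sudo accepts VAR=value arguments before the command itself.
            cmd := exec.Command("sudo",
                "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.25.3/kubectl", "apply", "-f", m)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("apply %s failed: %v\n%s", m, err, out)
            }
        }
    }
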
I0108 13:25:44.988772 11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.988783 11017 pod_ready.go:81] duration metric: took 401.647771ms waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.988791 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.186660 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186678 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.186841 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186866 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.186869 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.186878 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.186883 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.186898 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186912 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187089 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187104 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187114 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187131 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187115 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187146 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187125 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.187159 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.187192 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187230 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187349 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187402 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187401 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187428 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187436 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.245840 11017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0108 13:25:45.282957 11017 addons.go:488] enableAddons completed in 861.279533ms
I0108 13:25:45.388516 11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:45.388528 11017 pod_ready.go:81] duration metric: took 399.731294ms waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.388537 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.787890 11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:45.787901 11017 pod_ready.go:81] duration metric: took 399.340179ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.787908 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.187439 11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:46.187453 11017 pod_ready.go:81] duration metric: took 399.536729ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.187459 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.588219 11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:46.588232 11017 pod_ready.go:81] duration metric: took 400.763589ms waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.588239 11017 pod_ready.go:38] duration metric: took 2.0892776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:46.588288 11017 api_server.go:51] waiting for apiserver process to appear ...
I0108 13:25:46.588361 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:46.598144 11017 api_server.go:71] duration metric: took 2.176485692s to wait for apiserver process to appear ...
I0108 13:25:46.598158 11017 api_server.go:87] waiting for apiserver healthz status ...
I0108 13:25:46.598165 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:46.602085 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
ok
I0108 13:25:46.602639 11017 api_server.go:140] control plane version: v1.25.3
I0108 13:25:46.602648 11017 api_server.go:130] duration metric: took 4.486281ms to wait for apiserver health ...
I0108 13:25:46.602654 11017 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 13:25:46.791503 11017 system_pods.go:59] 7 kube-system pods found
I0108 13:25:46.791521 11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:46.791526 11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
I0108 13:25:46.791529 11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
I0108 13:25:46.791533 11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
I0108 13:25:46.791538 11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
I0108 13:25:46.791542 11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
I0108 13:25:46.791550 11017 system_pods.go:61] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0108 13:25:46.791556 11017 system_pods.go:74] duration metric: took 188.896938ms to wait for pod list to return data ...
I0108 13:25:46.791561 11017 default_sa.go:34] waiting for default service account to be created ...
I0108 13:25:46.988179 11017 default_sa.go:45] found service account: "default"
I0108 13:25:46.988192 11017 default_sa.go:55] duration metric: took 196.618556ms for default service account to be created ...
I0108 13:25:46.988197 11017 system_pods.go:116] waiting for k8s-apps to be running ...
I0108 13:25:47.191037 11017 system_pods.go:86] 7 kube-system pods found
I0108 13:25:47.191051 11017 system_pods.go:89] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:47.191056 11017 system_pods.go:89] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
I0108 13:25:47.191059 11017 system_pods.go:89] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
I0108 13:25:47.191062 11017 system_pods.go:89] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
I0108 13:25:47.191068 11017 system_pods.go:89] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
I0108 13:25:47.191071 11017 system_pods.go:89] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
I0108 13:25:47.191075 11017 system_pods.go:89] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Running
I0108 13:25:47.191079 11017 system_pods.go:126] duration metric: took 202.877582ms to wait for k8s-apps to be running ...
I0108 13:25:47.191083 11017 system_svc.go:44] waiting for kubelet service to be running ....
I0108 13:25:47.191143 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 13:25:47.200849 11017 system_svc.go:56] duration metric: took 9.761745ms WaitForService to wait for kubelet.
I0108 13:25:47.200862 11017 kubeadm.go:573] duration metric: took 2.779206372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0108 13:25:47.200873 11017 node_conditions.go:102] verifying NodePressure condition ...
I0108 13:25:47.388983 11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0108 13:25:47.388998 11017 node_conditions.go:123] node cpu capacity is 2
I0108 13:25:47.389006 11017 node_conditions.go:105] duration metric: took 188.128513ms to run NodePressure ...
I0108 13:25:47.389012 11017 start.go:217] waiting for startup goroutines ...
I0108 13:25:47.389347 11017 ssh_runner.go:195] Run: rm -f paused
I0108 13:25:47.433718 11017 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
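
The final sanity check compares kubectl's minor version against the cluster's; 1.25.2 versus 1.25.3 gives a minor skew of 0, so no skew warning is printed. A sketch of that comparison on plain "major.minor.patch" strings:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings.
    func minorSkew(client, cluster string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(v, ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        c, err := minor(client)
        if err != nil {
            return 0, err
        }
        s, err := minor(cluster)
        if err != nil {
            return 0, err
        }
        if c > s {
            return c - s, nil
        }
        return s - c, nil
    }

    func main() {
        skew, _ := minorSkew("1.25.2", "1.25.3")
        fmt.Println("minor skew:", skew) // 0: client and cluster are close enough
    }
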
I0108 13:25:47.476634 11017 out.go:177] * Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-132406 -n pause-132406
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-132406 logs -n 25
E0108 13:25:50.418521 4201 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/functional-123219/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-132406 logs -n 25: (2.859341467s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| delete | -p force-systemd-flag-131733 | force-systemd-flag-131733 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| start | -p cert-expiration-131814 | cert-expiration-131814 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | docker-flags-131736 ssh | docker-flags-131736 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-131736 ssh | docker-flags-131736 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-131736 | docker-flags-131736 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| start | -p cert-options-131823 | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:19 PST |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | cert-options-131823 ssh | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-131823 -- sudo | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-131823 | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
| start | -p running-upgrade-131911 | running-upgrade-131911 | jenkins | v1.28.0 | 08 Jan 23 13:20 PST | 08 Jan 23 13:21 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p running-upgrade-131911 | running-upgrade-131911 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:21 PST |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p cert-expiration-131814 | cert-expiration-131814 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p cert-expiration-131814 | cert-expiration-131814 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:22 PST |
| stop | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:23 PST |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:23 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:24 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
| start | -p pause-132406 --memory=2048 | pause-132406 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p stopped-upgrade-132230 | stopped-upgrade-132230 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-132406 | pause-132406 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p stopped-upgrade-132230 | stopped-upgrade-132230 | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | 08 Jan 23 13:25 PST |
| start | -p NoKubernetes-132541 | NoKubernetes-132541 | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p NoKubernetes-132541 | NoKubernetes-132541 | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | |
| | --driver=hyperkit | | | | | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/08 13:25:41
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0108 13:25:41.864570 11086 out.go:296] Setting OutFile to fd 1 ...
I0108 13:25:41.864753 11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 13:25:41.864756 11086 out.go:309] Setting ErrFile to fd 2...
I0108 13:25:41.864759 11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 13:25:41.864887 11086 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
I0108 13:25:41.865398 11086 out.go:303] Setting JSON to false
I0108 13:25:41.884483 11086 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5115,"bootTime":1673208026,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0108 13:25:41.884582 11086 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0108 13:25:41.906843 11086 out.go:177] * [NoKubernetes-132541] minikube v1.28.0 on Darwin 13.0.1
I0108 13:25:41.948550 11086 notify.go:220] Checking for updates...
I0108 13:25:41.970851 11086 out.go:177] - MINIKUBE_LOCATION=15565
I0108 13:25:41.992551 11086 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
I0108 13:25:42.013641 11086 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 13:25:42.034777 11086 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 13:25:42.056742 11086 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
I0108 13:25:42.079417 11086 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:25:42.079463 11086 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 13:25:42.107801 11086 out.go:177] * Using the hyperkit driver based on user configuration
I0108 13:25:42.149620 11086 start.go:294] selected driver: hyperkit
I0108 13:25:42.149635 11086 start.go:838] validating driver "hyperkit" against <nil>
I0108 13:25:42.149659 11086 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 13:25:42.149782 11086 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 13:25:42.150003 11086 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0108 13:25:42.158308 11086 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0108 13:25:42.161836 11086 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:42.161854 11086 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0108 13:25:42.161898 11086 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I0108 13:25:42.164327 11086 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
I0108 13:25:42.164430 11086 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
I0108 13:25:42.164452 11086 cni.go:95] Creating CNI manager for ""
I0108 13:25:42.164459 11086 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 13:25:42.164468 11086 start_flags.go:317] config:
{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 13:25:42.164581 11086 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 13:25:42.222410 11086 out.go:177] * Starting control plane node NoKubernetes-132541 in cluster NoKubernetes-132541
I0108 13:25:42.259872 11086 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 13:25:42.260043 11086 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0108 13:25:42.260082 11086 cache.go:57] Caching tarball of preloaded images
I0108 13:25:42.260294 11086 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0108 13:25:42.260312 11086 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
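
[editor's note] The preload steps above only need to confirm the tarball already exists on disk before skipping the download. A minimal sketch of that existence check, assuming a hard-coded path for illustration (the real code derives it from the Kubernetes version, runtime, and architecture):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Hypothetical path; minikube derives this from version/runtime/arch.
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4")
	if info, err := os.Stat(tarball); err == nil && !info.IsDir() {
		fmt.Println("found local preload, skipping download:", tarball)
		return
	}
	fmt.Println("preload missing, would download:", tarball)
}
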
I0108 13:25:42.260462 11086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json ...
I0108 13:25:42.260510 11086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json: {Name:mkb313010fa03f74b48c17380336d5ac233d014a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
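
[editor's note] The "WriteFile acquiring ... {Delay:500ms Timeout:1m0s}" line above describes a retrying lock around the config.json write. A rough sketch of that pattern, retrying an exclusive lock-file create every 500ms until a 1-minute timeout (illustrative only, not minikube's actual lock implementation):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock retries an O_EXCL lock-file create until the timeout expires.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for lock " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	// Write the profile config while holding the lock.
	_ = os.WriteFile("/tmp/config.json", []byte(`{"Name":"NoKubernetes-132541"}`), 0o644)
}
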
I0108 13:25:42.261062 11086 cache.go:193] Successfully downloaded all kic artifacts
I0108 13:25:42.261110 11086 start.go:364] acquiring machines lock for NoKubernetes-132541: {Name:mk29e5f49e96ee5817a491da62b8738aae3fb506 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0108 13:25:42.261283 11086 start.go:368] acquired machines lock for "NoKubernetes-132541" in 157.235µs
I0108 13:25:42.261345 11086 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 13:25:42.261435 11086 start.go:125] createHost starting for "" (driver="hyperkit")
I0108 13:25:40.902617 11017 pod_ready.go:102] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"False"
I0108 13:25:43.390748 11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.390761 11017 pod_ready.go:81] duration metric: took 4.508462855s waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.390767 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.400557 11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.400568 11017 pod_ready.go:81] duration metric: took 9.796554ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.400574 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.403156 11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.403166 11017 pod_ready.go:81] duration metric: took 2.587107ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.403174 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.411451 11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.411465 11017 pod_ready.go:81] duration metric: took 1.008282022s waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.411472 11017 pod_ready.go:38] duration metric: took 14.051708866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
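
[editor's note] The pod_ready.go waits above poll each named kube-system pod until its PodReady condition reports True. A condensed client-go sketch of that loop (the kubeconfig path and pod name are placeholders, and the 4m0s timeout mirrors the "waiting up to 4m0s" lines):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll every 500ms, give up after 4 minutes.
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-pause-132406", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		return podReady(p), nil
	})
	fmt.Println("ready:", err == nil)
}
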
I0108 13:25:44.411481 11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0108 13:25:44.418863 11017 ops.go:34] apiserver oom_adj: -16
I0108 13:25:44.418873 11017 kubeadm.go:631] restartCluster took 25.948745972s
I0108 13:25:44.418878 11017 kubeadm.go:398] StartCluster complete in 25.970227663s
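
[editor's note] The oom_adj probe just above runs `cat /proc/$(pgrep kube-apiserver)/oom_adj` on the guest and expects a negative value (-16 here), meaning the apiserver is protected from the OOM killer. A local-only sketch of the same check using pgrep via os/exec (the real code runs it through ssh_runner):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// pgrep -xn: newest process whose name is exactly kube-apiserver.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println(err)
		return
	}
	adj, _ := strconv.Atoi(strings.TrimSpace(string(raw)))
	fmt.Println("apiserver oom_adj:", adj) // expect a negative value such as -16
}
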
I0108 13:25:44.418886 11017 settings.go:142] acquiring lock: {Name:mk8df047e431900506a7782529ec776808797932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:44.418977 11017 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15565-3013/kubeconfig
I0108 13:25:44.419424 11017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/kubeconfig: {Name:mk12e69a052d3b808fcdcd72ad62f9045d7b154d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:44.419963 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
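
[editor's note] The kapi.go client config dumped above is essentially a client-go rest.Config pointed at the apiserver with the profile's client cert/key and the cluster CA. Building an equivalent config by hand looks roughly like this (paths abbreviated):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.64.27:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/pause-132406/client.crt", // abbreviated
			KeyFile:  "/path/to/profiles/pause-132406/client.key",
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
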
I0108 13:25:44.421604 11017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-132406" rescaled to 1
I0108 13:25:44.421632 11017 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 13:25:44.421642 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0108 13:25:44.421664 11017 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0108 13:25:44.464718 11017 out.go:177] * Verifying Kubernetes components...
I0108 13:25:44.464758 11017 addons.go:65] Setting storage-provisioner=true in profile "pause-132406"
I0108 13:25:44.485523 11017 addons.go:227] Setting addon storage-provisioner=true in "pause-132406"
I0108 13:25:44.464765 11017 addons.go:65] Setting default-storageclass=true in profile "pause-132406"
I0108 13:25:44.421794 11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:25:44.475539 11017 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
W0108 13:25:44.485550 11017 addons.go:236] addon storage-provisioner should already be in state true
I0108 13:25:44.485556 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 13:25:44.485552 11017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-132406"
I0108 13:25:44.485605 11017 host.go:66] Checking if "pause-132406" exists ...
I0108 13:25:44.485877 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.485890 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.485895 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.485909 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.493358 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52925
I0108 13:25:44.493704 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52927
I0108 13:25:44.493766 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.494093 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.494097 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.494108 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.494325 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.494420 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.494437 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.494450 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.494517 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.494607 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.494633 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.495008 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.495031 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.496752 11017 node_ready.go:35] waiting up to 6m0s for node "pause-132406" to be "Ready" ...
I0108 13:25:44.497335 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 13:25:44.498930 11017 node_ready.go:49] node "pause-132406" has status "Ready":"True"
I0108 13:25:44.498941 11017 node_ready.go:38] duration metric: took 2.059705ms waiting for node "pause-132406" to be "Ready" ...
I0108 13:25:44.498947 11017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:44.499578 11017 addons.go:227] Setting addon default-storageclass=true in "pause-132406"
W0108 13:25:44.499589 11017 addons.go:236] addon default-storageclass should already be in state true
I0108 13:25:44.499606 11017 host.go:66] Checking if "pause-132406" exists ...
I0108 13:25:44.499869 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.499888 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.502432 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52929
I0108 13:25:44.502793 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.503162 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.503182 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.503337 11017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.503425 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.503542 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.503638 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.503741 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.505184 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:44.526650 11017 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0108 13:25:44.507300 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52931
I0108 13:25:44.527054 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.547766 11017 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0108 13:25:44.547777 11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0108 13:25:44.547790 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:44.547909 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:44.548050 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.548063 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.548095 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:44.548190 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:44.548274 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.548290 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:44.548649 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.548675 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.555825 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52934
I0108 13:25:44.556201 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.556573 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.556585 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.556785 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.556890 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.556978 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.557074 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.558022 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:44.558187 11017 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0108 13:25:44.558196 11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0108 13:25:44.558205 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:44.558288 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:44.558385 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:44.558470 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:44.558547 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
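
[editor's note] The `scp memory --> /etc/kubernetes/addons/...` lines above copy in-memory manifests to the guest over the SSH clients created here. One way to do that with golang.org/x/crypto/ssh, piping the bytes into a remote `sudo tee` (a sketch of the idea, not minikube's actual ssh_runner implementation):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/pause-132406/id_rsa") // abbreviated
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.64.27:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	manifest := []byte("apiVersion: v1\nkind: ServiceAccount\n") // in-memory payload
	sess.Stdin = bytes.NewReader(manifest)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
		panic(err)
	}
}
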
I0108 13:25:44.587109 11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.587119 11017 pod_ready.go:81] duration metric: took 83.771886ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.587128 11017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.599174 11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0108 13:25:44.609018 11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0108 13:25:44.988772 11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.988783 11017 pod_ready.go:81] duration metric: took 401.647771ms waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.988791 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.186660 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186678 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.186841 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186866 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.186869 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.186878 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.186883 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.186898 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186912 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187089 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187104 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187114 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187131 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187115 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187146 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187125 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.187159 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.187192 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187230 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187349 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187402 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187401 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187428 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187436 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.245840 11017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0108 13:25:42.303634 11086 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0108 13:25:42.304124 11086 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:42.304196 11086 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:42.312348 11086 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52923
I0108 13:25:42.312703 11086 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:42.313112 11086 main.go:134] libmachine: Using API Version 1
I0108 13:25:42.313119 11086 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:42.313353 11086 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:42.313462 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetMachineName
I0108 13:25:42.313550 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .DriverName
I0108 13:25:42.313649 11086 start.go:159] libmachine.API.Create for "NoKubernetes-132541" (driver="hyperkit")
I0108 13:25:42.313676 11086 client.go:168] LocalClient.Create starting
I0108 13:25:42.313716 11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem
I0108 13:25:42.313761 11086 main.go:134] libmachine: Decoding PEM data...
I0108 13:25:42.313774 11086 main.go:134] libmachine: Parsing certificate...
I0108 13:25:42.313838 11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem
I0108 13:25:42.313868 11086 main.go:134] libmachine: Decoding PEM data...
I0108 13:25:42.313878 11086 main.go:134] libmachine: Parsing certificate...
I0108 13:25:42.313894 11086 main.go:134] libmachine: Running pre-create checks...
I0108 13:25:42.313905 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .PreCreateCheck
I0108 13:25:42.313976 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.314125 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetConfigRaw
I0108 13:25:42.314532 11086 main.go:134] libmachine: Creating machine...
I0108 13:25:42.314538 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .Create
I0108 13:25:42.314603 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.314731 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.314597 11094 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15565-3013/.minikube
I0108 13:25:42.314792 11086 main.go:134] libmachine: (NoKubernetes-132541) Downloading /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15565-3013/.minikube/cache/iso/amd64/minikube-v1.28.0-1673190013-15565-amd64.iso...
I0108 13:25:42.460398 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.460335 11094 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/id_rsa...
I0108 13:25:42.503141 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503046 11094 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk...
I0108 13:25:42.503157 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing magic tar header
I0108 13:25:42.503171 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing SSH key tar header
I0108 13:25:42.503539 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503489 11094 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541 ...
I0108 13:25:42.653730 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.653744 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid
I0108 13:25:42.653784 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Using UUID 014f6508-8f9b-11ed-91e7-149d997fca88
I0108 13:25:42.678089 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Generated MAC 4e:f0:b3:1f:f:2b
I0108 13:25:42.678104 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541
I0108 13:25:42.678131 11086 main.go:134] (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0108 13:25:42.678168 11086 main.go:134] (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0108 13:25:42.678217 11086 main.go:134] (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "014f6508-8f9b-11ed-91e7-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"}
I0108 13:25:42.678253 11086 main.go:134] (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 014f6508-8f9b-11ed-91e7-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"
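
[editor's note] The DEBUG: hyperkit: CmdLine above is ultimately an exec of /usr/local/bin/hyperkit; the driver then records the child PID (11097 here) so later attempts can find the running VM. A stripped-down sketch of that launch, with the argument list heavily abbreviated relative to the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
)

func main() {
	// A tiny subset of the flags from the log; the real invocation also wires
	// up the raw disk, boot ISO, serial console, and kexec boot files.
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u",
		"-c", "2",
		"-m", "6000M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
	)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Persist the PID ("hyperkit pid from json" in the log reads it back).
	pid := strconv.Itoa(cmd.Process.Pid)
	_ = os.WriteFile("/tmp/hyperkit.pid", []byte(pid), 0o644)
	fmt.Println("hyperkit pid:", pid)
}
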
I0108 13:25:42.678262 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0108 13:25:42.679546 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Pid is 11097
I0108 13:25:42.679869 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 0
I0108 13:25:42.679885 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.679938 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
I0108 13:25:42.680931 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
I0108 13:25:42.681019 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
I0108 13:25:42.681065 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
I0108 13:25:42.681091 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
I0108 13:25:42.681107 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
I0108 13:25:42.681116 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
I0108 13:25:42.681130 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
I0108 13:25:42.681144 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
I0108 13:25:42.681154 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
I0108 13:25:42.681167 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
I0108 13:25:42.681181 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
I0108 13:25:42.681193 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
I0108 13:25:42.681203 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
I0108 13:25:42.681218 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
I0108 13:25:42.681228 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
I0108 13:25:42.681241 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
I0108 13:25:42.681253 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
I0108 13:25:42.681264 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
I0108 13:25:42.681275 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
I0108 13:25:42.681297 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
I0108 13:25:42.681310 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
I0108 13:25:42.681320 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
I0108 13:25:42.681333 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
I0108 13:25:42.681343 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
I0108 13:25:42.681355 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
I0108 13:25:42.681370 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
I0108 13:25:42.681383 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
I0108 13:25:42.681398 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
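
[editor's note] Each "Attempt N" block above rescans /var/db/dhcpd_leases for the MAC hyperkit generated (4e:f0:b3:1f:f:2b), since macOS's bootpd is what hands the VM its IP. A simplified parser for that lease file, matching on the hw_address field (a sketch: the real entries are multi-line `{...}` blocks, which this line-oriented scan handles only approximately):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans macOS's dhcpd_leases for an entry whose hw_address
// matches the given MAC and returns the ip_address seen just before it.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// Field looks like "hw_address=1,4e:f0:b3:1f:f:2b".
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("MAC %s not found", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "4e:f0:b3:1f:f:2b")
	if err != nil {
		fmt.Println(err) // expected until the VM's DHCP lease appears
		return
	}
	fmt.Println("VM IP:", ip)
}
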
I0108 13:25:42.686391 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0108 13:25:42.695558 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0108 13:25:42.696153 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0108 13:25:42.696174 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0108 13:25:42.696188 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0108 13:25:42.696199 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0108 13:25:43.257820 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0108 13:25:43.257832 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0108 13:25:43.362873 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0108 13:25:43.362883 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0108 13:25:43.362890 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0108 13:25:43.362899 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0108 13:25:43.363783 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0108 13:25:43.363789 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0108 13:25:44.682748 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 1
I0108 13:25:44.682757 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.682837 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
I0108 13:25:44.684384 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
I0108 13:25:44.684451 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
I0108 13:25:44.684458 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
I0108 13:25:44.684475 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
I0108 13:25:44.684481 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
I0108 13:25:44.684487 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
I0108 13:25:44.684492 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
I0108 13:25:44.684504 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
I0108 13:25:44.684509 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
I0108 13:25:44.684516 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
I0108 13:25:44.684521 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
I0108 13:25:44.684527 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
I0108 13:25:44.684533 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
I0108 13:25:44.684541 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
I0108 13:25:44.684548 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
I0108 13:25:44.684554 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
I0108 13:25:44.684559 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
I0108 13:25:44.684578 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
I0108 13:25:44.684588 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
I0108 13:25:44.684597 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
I0108 13:25:44.684604 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
I0108 13:25:44.684610 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
I0108 13:25:44.684619 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
I0108 13:25:44.684625 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
I0108 13:25:44.684632 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
I0108 13:25:44.684637 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
I0108 13:25:44.684642 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
I0108 13:25:44.684651 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
I0108 13:25:46.686543 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 2
I0108 13:25:46.686557 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:46.686628 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
I0108 13:25:46.687410 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
I0108 13:25:46.687458 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
I0108 13:25:46.687466 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
I0108 13:25:46.687481 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
I0108 13:25:46.687487 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
I0108 13:25:46.687494 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
I0108 13:25:46.687499 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
I0108 13:25:46.687509 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
I0108 13:25:46.687514 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
I0108 13:25:46.687521 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
I0108 13:25:46.687526 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
I0108 13:25:46.687532 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
I0108 13:25:46.687540 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
I0108 13:25:46.687545 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
I0108 13:25:46.687551 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
I0108 13:25:46.687558 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
I0108 13:25:46.687564 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
I0108 13:25:46.687569 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
I0108 13:25:46.687584 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
I0108 13:25:46.687591 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
I0108 13:25:46.687599 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
I0108 13:25:46.687607 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
I0108 13:25:46.687612 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
I0108 13:25:46.687617 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
I0108 13:25:46.687623 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
I0108 13:25:46.687629 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
I0108 13:25:46.687636 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
I0108 13:25:46.687643 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
I0108 13:25:45.282957 11017 addons.go:488] enableAddons completed in 861.279533ms
I0108 13:25:45.388516 11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:45.388528 11017 pod_ready.go:81] duration metric: took 399.731294ms waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.388537 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.787890 11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:45.787901 11017 pod_ready.go:81] duration metric: took 399.340179ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.787908 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.187439 11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:46.187453 11017 pod_ready.go:81] duration metric: took 399.536729ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.187459 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.588219 11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:46.588232 11017 pod_ready.go:81] duration metric: took 400.763589ms waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.588239 11017 pod_ready.go:38] duration metric: took 2.0892776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:46.588288 11017 api_server.go:51] waiting for apiserver process to appear ...
I0108 13:25:46.588361 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:46.598144 11017 api_server.go:71] duration metric: took 2.176485692s to wait for apiserver process to appear ...
I0108 13:25:46.598158 11017 api_server.go:87] waiting for apiserver healthz status ...
I0108 13:25:46.598165 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:46.602085 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
ok
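
[editor's note] The healthz probe above is a plain HTTPS GET against /healthz that succeeds once the apiserver returns 200 with body "ok". Equivalent stdlib code (skipping certificate verification for brevity; the real client trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only; the real check verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.64.27:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
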
I0108 13:25:46.602639 11017 api_server.go:140] control plane version: v1.25.3
I0108 13:25:46.602648 11017 api_server.go:130] duration metric: took 4.486281ms to wait for apiserver health ...
I0108 13:25:46.602654 11017 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 13:25:46.791503 11017 system_pods.go:59] 7 kube-system pods found
I0108 13:25:46.791521 11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:46.791526 11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
I0108 13:25:46.791529 11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
I0108 13:25:46.791533 11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
I0108 13:25:46.791538 11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
I0108 13:25:46.791542 11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
I0108 13:25:46.791550 11017 system_pods.go:61] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0108 13:25:46.791556 11017 system_pods.go:74] duration metric: took 188.896938ms to wait for pod list to return data ...
I0108 13:25:46.791561 11017 default_sa.go:34] waiting for default service account to be created ...
I0108 13:25:46.988179 11017 default_sa.go:45] found service account: "default"
I0108 13:25:46.988192 11017 default_sa.go:55] duration metric: took 196.618556ms for default service account to be created ...
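
[editor's note] The default_sa.go wait above exists because pods can't be admitted into a namespace until its "default" ServiceAccount has been created by the controller manager. A rough client-go equivalent of that poll (kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Retry until the "default" ServiceAccount exists in the default namespace.
	err = wait.PollImmediate(time.Second, 6*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil
	})
	fmt.Println("default service account present:", err == nil)
}
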
I0108 13:25:46.988197 11017 system_pods.go:116] waiting for k8s-apps to be running ...
I0108 13:25:47.191037 11017 system_pods.go:86] 7 kube-system pods found
I0108 13:25:47.191051 11017 system_pods.go:89] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:47.191056 11017 system_pods.go:89] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
I0108 13:25:47.191059 11017 system_pods.go:89] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
I0108 13:25:47.191062 11017 system_pods.go:89] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
I0108 13:25:47.191068 11017 system_pods.go:89] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
I0108 13:25:47.191071 11017 system_pods.go:89] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
I0108 13:25:47.191075 11017 system_pods.go:89] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Running
I0108 13:25:47.191079 11017 system_pods.go:126] duration metric: took 202.877582ms to wait for k8s-apps to be running ...
I0108 13:25:47.191083 11017 system_svc.go:44] waiting for kubelet service to be running ....
I0108 13:25:47.191143 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 13:25:47.200849 11017 system_svc.go:56] duration metric: took 9.761745ms WaitForService to wait for kubelet.
I0108 13:25:47.200862 11017 kubeadm.go:573] duration metric: took 2.779206372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
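(The kubelet check above relies on `systemctl is-active --quiet`, which prints nothing and signals state purely through its exit code. A small Go sketch of the same probe via os/exec, run locally without the ssh_runner/sudo wrapping used here.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active,
        // non-zero otherwise; no output parsing is needed.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is running")
    }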
I0108 13:25:47.200873 11017 node_conditions.go:102] verifying NodePressure condition ...
I0108 13:25:47.388983 11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0108 13:25:47.388998 11017 node_conditions.go:123] node cpu capacity is 2
I0108 13:25:47.389006 11017 node_conditions.go:105] duration metric: took 188.128513ms to run NodePressure ...
I0108 13:25:47.389012 11017 start.go:217] waiting for startup goroutines ...
I0108 13:25:47.389347 11017 ssh_runner.go:195] Run: rm -f paused
I0108 13:25:47.433718 11017 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
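(The skew note above compares only the minor version components of kubectl and the cluster: 1.25.2 vs 1.25.3 gives a minor skew of 0. A hypothetical helper showing that arithmetic; this is not minikube's start.go code.)

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch" version strings, e.g. "1.25.2" vs "1.25.3" -> 0.
    func minorSkew(a, b string) (int, error) {
        pa := strings.Split(a, ".")
        pb := strings.Split(b, ".")
        if len(pa) < 2 || len(pb) < 2 {
            return 0, fmt.Errorf("unexpected version format: %q vs %q", a, b)
        }
        ma, err := strconv.Atoi(pa[1])
        if err != nil {
            return 0, err
        }
        mb, err := strconv.Atoi(pb[1])
        if err != nil {
            return 0, err
        }
        if ma > mb {
            return ma - mb, nil
        }
        return mb - ma, nil
    }

    func main() {
        skew, err := minorSkew("1.25.2", "1.25.3")
        if err != nil {
            panic(err)
        }
        fmt.Println("minor skew:", skew) // prints 0, matching the log line above
    }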
I0108 13:25:47.476634 11017 out.go:177] * Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:48 UTC. --
Jan 08 21:25:25 pause-132406 dockerd[3702]: time="2023-01-08T21:25:25.290659171Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3314497202fc8ceeb27ca7190e02eafa07a3b2174edff746b07ed7a18bb2797e pid=5489 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310282074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310632315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310689702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.311253391Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc pid=5652 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590607044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590804961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590861947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.591114210Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cfca5ca38f1bb40a1f783df11849538c078a7ea84cd1507a93401e6ac921043c pid=5701 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696304305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696340262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696348212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696786183Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a037098dc5d0363118aa47fc6662a0cb9803f357dbe7488d39ac54fbda264a85 pid=5742 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852485421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852650670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852752050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.853144561Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2fa6736cca283a0849a4133c4846ff785f1dabecc824ab55422b9fe1df5fb20e pid=5806 runtime=io.containerd.runc.v2
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693887010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693975959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693985352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.694546414Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8 pid=6088 runtime=io.containerd.runc.v2
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.042984174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043017322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043025311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043168759Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/47155f3f92e2edb1c9b9544dbec392073d4a267f0e2e171c4c0c8f41eed1b42d pid=6226 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
47155f3f92e2e 6e38f40d628db 2 seconds ago Running storage-provisioner 0 7509d84ccc611
2fa6736cca283 5185b96f0becf 18 seconds ago Running coredns 2 2c864c071578b
a037098dc5d03 beaaf00edd38a 18 seconds ago Running kube-proxy 2 cfca5ca38f1bb
3314497202fc8 6d23ec0e8b87e 23 seconds ago Running kube-scheduler 3 7225e68b6cdb9
2702ef37e8c9f a8a176a5d5d69 23 seconds ago Running etcd 3 d6054662c415b
85b18341d5fa3 6039992312758 24 seconds ago Running kube-controller-manager 3 167990773c8df
e49c330971e33 0346dbd74bcb9 24 seconds ago Running kube-apiserver 3 7719cf6e2ded6
6c8e664a440de 6d23ec0e8b87e 26 seconds ago Created kube-scheduler 2 b17d288e92aba
5836a9370f77e beaaf00edd38a 26 seconds ago Created kube-proxy 1 d0c6f1675c8df
bf3a9fcdde4ed 5185b96f0becf 26 seconds ago Created coredns 1 80b9970570ee9
359f540cb31f6 6039992312758 27 seconds ago Created kube-controller-manager 2 82b65485dbb4d
b3ea39090c67a a8a176a5d5d69 27 seconds ago Exited etcd 2 d4f72481538e7
a59e122b43f1f 0346dbd74bcb9 27 seconds ago Exited kube-apiserver 2 b5535145a6cf3
c2ddc4b3adc5e 5185b96f0becf 53 seconds ago Exited coredns 0 11b52ad80c153
*
* ==> coredns [2fa6736cca28] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [bf3a9fcdde4e] <==
*
*
* ==> coredns [c2ddc4b3adc5] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: pause-132406
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-132406
kubernetes.io/os=linux
minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
minikube.k8s.io/name=pause-132406
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_08T13_24_43_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 08 Jan 2023 21:24:42 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-132406
AcquireTime: <unset>
RenewTime: Sun, 08 Jan 2023 21:25:39 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 08 Jan 2023 21:25:28 +0000 Sun, 08 Jan 2023 21:24:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 08 Jan 2023 21:25:28 +0000 Sun, 08 Jan 2023 21:24:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 08 Jan 2023 21:25:28 +0000 Sun, 08 Jan 2023 21:24:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 08 Jan 2023 21:25:28 +0000 Sun, 08 Jan 2023 21:25:28 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.27
Hostname: pause-132406
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 7ba663e0089540a7aff02be8cb7e7914
System UUID: c84e11ed-0000-0000-a16b-149d997fca88
Boot ID: e1c358fb-4be5-406e-aa57-71fbfb8be72e
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-565d847f94-t2bdb 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 54s
kube-system etcd-pause-132406 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 64s
kube-system kube-apiserver-pause-132406 250m (12%) 0 (0%) 0 (0%) 0 (0%) 65s
kube-system kube-controller-manager-pause-132406 200m (10%) 0 (0%) 0 (0%) 0 (0%) 65s
kube-system kube-proxy-c2zj2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 54s
kube-system kube-scheduler-pause-132406 100m (5%) 0 (0%) 0 (0%) 0 (0%) 64s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 53s kube-proxy
Normal Starting 18s kube-proxy
Normal NodeHasSufficientPID 65s kubelet Node pause-132406 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 65s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 65s kubelet Node pause-132406 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 65s kubelet Node pause-132406 status is now: NodeHasNoDiskPressure
Normal NodeReady 65s kubelet Node pause-132406 status is now: NodeReady
Normal Starting 65s kubelet Starting kubelet.
Normal RegisteredNode 54s node-controller Node pause-132406 event: Registered Node pause-132406 in Controller
Normal Starting 25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 25s (x8 over 25s) kubelet Node pause-132406 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 25s (x8 over 25s) kubelet Node pause-132406 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 25s (x7 over 25s) kubelet Node pause-132406 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 25s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7s node-controller Node pause-132406 event: Registered Node pause-132406 in Controller
*
* ==> dmesg <==
* [ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.891901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.787838] systemd-fstab-generator[528]: Ignoring "noauto" for root device
[ +0.090038] systemd-fstab-generator[539]: Ignoring "noauto" for root device
[ +5.154737] systemd-fstab-generator[759]: Ignoring "noauto" for root device
[ +1.211845] kauditd_printk_skb: 16 callbacks suppressed
[ +0.212584] systemd-fstab-generator[921]: Ignoring "noauto" for root device
[ +0.092038] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.088864] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +1.451512] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
[ +0.096327] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
[ +3.011005] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
[ +0.609217] kauditd_printk_skb: 68 callbacks suppressed
[ +14.122766] systemd-fstab-generator[1992]: Ignoring "noauto" for root device
[ +11.883875] kauditd_printk_skb: 8 callbacks suppressed
[ +5.253764] systemd-fstab-generator[2883]: Ignoring "noauto" for root device
[ +0.141331] systemd-fstab-generator[2894]: Ignoring "noauto" for root device
[Jan 8 21:25] systemd-fstab-generator[2905]: Ignoring "noauto" for root device
[ +0.401098] kauditd_printk_skb: 18 callbacks suppressed
[ +16.643218] systemd-fstab-generator[4108]: Ignoring "noauto" for root device
[ +0.107408] systemd-fstab-generator[4162]: Ignoring "noauto" for root device
[ +5.496654] systemd-fstab-generator[5099]: Ignoring "noauto" for root device
[ +6.803519] kauditd_printk_skb: 31 callbacks suppressed
*
* ==> etcd [2702ef37e8c9] <==
* {"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 is starting a new election at term 3"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became pre-candidate at term 3"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgPreVoteResp from d9a8ee5ed7997f86 at term 3"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became candidate at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgVoteResp from d9a8ee5ed7997f86 at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became leader at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9a8ee5ed7997f86 elected leader d9a8ee5ed7997f86 at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d9a8ee5ed7997f86","local-member-attributes":"{Name:pause-132406 ClientURLs:[https://192.168.64.27:2379]}","request-path":"/0/members/d9a8ee5ed7997f86/attributes","cluster-id":"d657f6537ff55566","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-08T21:25:26.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.27:2379"}
*
* ==> etcd [b3ea39090c67] <==
* {"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"]}
{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-132406","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147
483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-01-08T21:25:22.355Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"417.198µs"}
{"level":"info","ts":"2023-01-08T21:25:22.363Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","commit-index":399}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=()"}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became follower at term 3"}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d9a8ee5ed7997f86 [peers: [], term: 3, commit: 399, applied: 0, lastindex: 399, lastterm: 3]"}
{"level":"warn","ts":"2023-01-08T21:25:22.366Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-01-08T21:25:22.367Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":382}
{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"d9a8ee5ed7997f86","timeout":"7s"}
{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"d9a8ee5ed7997f86"}
{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-01-08T21:25:22.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
{"level":"info","ts":"2023-01-08T21:25:22.370Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
*
* ==> kernel <==
* 21:25:49 up 1 min, 0 users, load average: 0.89, 0.30, 0.11
Linux pause-132406 5.10.57 #1 SMP Sun Jan 8 19:17:02 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [a59e122b43f1] <==
*
*
* ==> kube-apiserver [e49c330971e3] <==
* I0108 21:25:28.700297 1 controller.go:85] Starting OpenAPI V3 controller
I0108 21:25:28.700429 1 naming_controller.go:291] Starting NamingConditionController
I0108 21:25:28.700511 1 establishing_controller.go:76] Starting EstablishingController
I0108 21:25:28.700559 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0108 21:25:28.701254 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0108 21:25:28.701371 1 crd_finalizer.go:266] Starting CRDFinalizer
I0108 21:25:28.701544 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0108 21:25:28.702007 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0108 21:25:28.782489 1 shared_informer.go:262] Caches are synced for node_authorizer
I0108 21:25:28.791936 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0108 21:25:28.792377 1 apf_controller.go:305] Running API Priority and Fairness config worker
I0108 21:25:28.792956 1 cache.go:39] Caches are synced for autoregister controller
I0108 21:25:28.793154 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0108 21:25:28.795220 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0108 21:25:28.800502 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0108 21:25:28.858432 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0108 21:25:29.472691 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0108 21:25:29.697351 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0108 21:25:30.382302 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0108 21:25:30.390603 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0108 21:25:30.412248 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0108 21:25:30.430489 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0108 21:25:30.435393 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0108 21:25:41.061315 1 controller.go:616] quota admission added evaluator for: endpoints
I0108 21:25:41.169568 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [359f540cb31f] <==
*
*
* ==> kube-controller-manager [85b18341d5fa] <==
* I0108 21:25:41.096790 1 shared_informer.go:262] Caches are synced for expand
I0108 21:25:41.099296 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0108 21:25:41.115597 1 shared_informer.go:262] Caches are synced for deployment
I0108 21:25:41.119008 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0108 21:25:41.133526 1 shared_informer.go:262] Caches are synced for node
I0108 21:25:41.133592 1 range_allocator.go:166] Starting range CIDR allocator
I0108 21:25:41.133606 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0108 21:25:41.133651 1 shared_informer.go:262] Caches are synced for cidrallocator
I0108 21:25:41.142840 1 shared_informer.go:262] Caches are synced for daemon sets
I0108 21:25:41.157497 1 shared_informer.go:262] Caches are synced for taint
I0108 21:25:41.157759 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0108 21:25:41.157993 1 taint_manager.go:209] "Sending events to api server"
I0108 21:25:41.157772 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0108 21:25:41.158417 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-132406. Assuming now as a timestamp.
I0108 21:25:41.158643 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0108 21:25:41.158045 1 event.go:294] "Event occurred" object="pause-132406" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-132406 event: Registered Node pause-132406 in Controller"
I0108 21:25:41.160214 1 shared_informer.go:262] Caches are synced for persistent volume
I0108 21:25:41.161892 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0108 21:25:41.162048 1 shared_informer.go:262] Caches are synced for GC
I0108 21:25:41.171471 1 shared_informer.go:262] Caches are synced for TTL
I0108 21:25:41.197886 1 shared_informer.go:262] Caches are synced for resource quota
I0108 21:25:41.235793 1 shared_informer.go:262] Caches are synced for resource quota
I0108 21:25:41.610248 1 shared_informer.go:262] Caches are synced for garbage collector
I0108 21:25:41.657226 1 shared_informer.go:262] Caches are synced for garbage collector
I0108 21:25:41.657437 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [5836a9370f77] <==
*
*
* ==> kube-proxy [a037098dc5d0] <==
* I0108 21:25:30.850478 1 node.go:163] Successfully retrieved node IP: 192.168.64.27
I0108 21:25:30.850523 1 server_others.go:138] "Detected node IP" address="192.168.64.27"
I0108 21:25:30.850546 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0108 21:25:30.900885 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0108 21:25:30.901123 1 server_others.go:206] "Using iptables Proxier"
I0108 21:25:30.901146 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0108 21:25:30.902097 1 server.go:661] "Version info" version="v1.25.3"
I0108 21:25:30.902216 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0108 21:25:30.902654 1 config.go:317] "Starting service config controller"
I0108 21:25:30.902693 1 shared_informer.go:255] Waiting for caches to sync for service config
I0108 21:25:30.902720 1 config.go:226] "Starting endpoint slice config controller"
I0108 21:25:30.902731 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0108 21:25:30.903108 1 config.go:444] "Starting node config controller"
I0108 21:25:30.903471 1 shared_informer.go:255] Waiting for caches to sync for node config
I0108 21:25:31.002821 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0108 21:25:31.002956 1 shared_informer.go:262] Caches are synced for service config
I0108 21:25:31.003974 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [3314497202fc] <==
* I0108 21:25:26.516401 1 serving.go:348] Generated self-signed cert in-memory
W0108 21:25:28.747634 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0108 21:25:28.747668 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0108 21:25:28.747676 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0108 21:25:28.747682 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0108 21:25:28.778528 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0108 21:25:28.778884 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0108 21:25:28.780681 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0108 21:25:28.780730 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0108 21:25:28.781271 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0108 21:25:28.780750 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0108 21:25:28.882423 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [6c8e664a440d] <==
*
*
* ==> kubelet <==
* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:50 UTC. --
Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.600279 5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.700806 5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.801519 5105 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.802334 5105 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818830 5105 kubelet_node_status.go:108] "Node was previously registered" node="pause-132406"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818970 5105 kubelet_node_status.go:73] "Successfully registered node" node="pause-132406"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.648000 5105 apiserver.go:52] "Watching apiserver"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649885 5105 topology_manager.go:205] "Topology Admit Handler"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649962 5105 topology_manager.go:205] "Topology Admit Handler"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.709981 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b1d4603-7531-4c5b-b5d1-17f4712c727e-config-volume\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710359 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4nq\" (UniqueName: \"kubernetes.io/projected/4b1d4603-7531-4c5b-b5d1-17f4712c727e-kube-api-access-bm4nq\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710457 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-proxy\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710554 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-xtables-lock\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710604 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzq48\" (UniqueName: \"kubernetes.io/projected/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-api-access-lzq48\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710707 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-lib-modules\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710785 5105 reconciler.go:169] "Reconciler: start to sync state"
Jan 08 21:25:30 pause-132406 kubelet[5105]: I0108 21:25:30.786116 5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc"
Jan 08 21:25:32 pause-132406 kubelet[5105]: I0108 21:25:32.815079 5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Jan 08 21:25:38 pause-132406 kubelet[5105]: I0108 21:25:38.949973 5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.283961 5105 topology_manager.go:205] "Topology Admit Handler"
Jan 08 21:25:45 pause-132406 kubelet[5105]: E0108 21:25:45.284028 5105 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.284048 5105 memory_manager.go:345] "RemoveStaleState removing state" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.379908 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-tmp\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.380046 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dd8\" (UniqueName: \"kubernetes.io/projected/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-kube-api-access-b5dd8\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.964235 5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8"
*
* ==> storage-provisioner [47155f3f92e2] <==
* I0108 21:25:46.098217 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0108 21:25:46.107255 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0108 21:25:46.107432 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0108 21:25:46.112096 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0108 21:25:46.112481 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
I0108 21:25:46.113189 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4245f6bb-b0ff-44ce-bc47-687e46bad904", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5 became leader
I0108 21:25:46.217361 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-132406 -n pause-132406
helpers_test.go:261: (dbg) Run: kubectl --context pause-132406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
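helpers_test.go: (The non-running-pods query above relies on the `status.phase!=Running` field selector. The same filter expressed with client-go instead of kubectl; kubeconfig resolution here is an assumption, whereas the test pins --context explicitly.)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: default kubeconfig location (~/.kube/config).
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Same filter the helper passes to kubectl: every pod whose phase is not Running.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
        }
    }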
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-132406 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-132406 describe pod : exit status 1 (39.854254ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-132406 describe pod : exit status 1
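helpers_test.go: (The non-zero exit above is expected: `kubectl describe pod` with neither a name nor a selector refuses to run. A sketch of how such an exit status can be captured from Go, mirroring what the (dbg) Run helper does; simplified, not the actual helpers_test.go code.)

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Reproduces the failure mode above: with no pod name and no selector,
        // kubectl exits 1 with "error: resource name may not be empty".
        out, err := exec.Command("kubectl", "--context", "pause-132406", "describe", "pod").CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("exit status %d: %s", exitErr.ExitCode(), out)
            return
        }
        if err != nil {
            panic(err) // kubectl not found, context missing, etc.
        }
        fmt.Printf("%s", out)
    }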
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-132406 -n pause-132406
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-132406 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-132406 logs -n 25: (2.828323591s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| delete | -p force-systemd-flag-131733 | force-systemd-flag-131733 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| start | -p cert-expiration-131814 | cert-expiration-131814 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | docker-flags-131736 ssh | docker-flags-131736 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-131736 ssh | docker-flags-131736 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-131736 | docker-flags-131736 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:18 PST |
| start | -p cert-options-131823 | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:18 PST | 08 Jan 23 13:19 PST |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | cert-options-131823 ssh | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-131823 -- sudo | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-131823 | cert-options-131823 | jenkins | v1.28.0 | 08 Jan 23 13:19 PST | 08 Jan 23 13:19 PST |
| start | -p running-upgrade-131911 | running-upgrade-131911 | jenkins | v1.28.0 | 08 Jan 23 13:20 PST | 08 Jan 23 13:21 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p running-upgrade-131911 | running-upgrade-131911 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:21 PST |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p cert-expiration-131814 | cert-expiration-131814 | jenkins | v1.28.0 | 08 Jan 23 13:21 PST | 08 Jan 23 13:22 PST |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p cert-expiration-131814 | cert-expiration-131814 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:22 PST |
| stop | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:22 PST | 08 Jan 23 13:23 PST |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:23 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:23 PST | 08 Jan 23 13:24 PST |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p kubernetes-upgrade-132147 | kubernetes-upgrade-132147 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
| start | -p pause-132406 --memory=2048 | pause-132406 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:24 PST |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p stopped-upgrade-132230 | stopped-upgrade-132230 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
| | --memory=2200 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p pause-132406 | pause-132406 | jenkins | v1.28.0 | 08 Jan 23 13:24 PST | 08 Jan 23 13:25 PST |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p stopped-upgrade-132230 | stopped-upgrade-132230 | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | 08 Jan 23 13:25 PST |
| start | -p NoKubernetes-132541 | NoKubernetes-132541 | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | |
| | --no-kubernetes | | | | | |
| | --kubernetes-version=1.20 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p NoKubernetes-132541 | NoKubernetes-132541 | jenkins | v1.28.0 | 08 Jan 23 13:25 PST | |
| | --driver=hyperkit | | | | | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/01/08 13:25:41
Running on machine: MacOS-Agent-4
Binary: Built with gc go1.19.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0108 13:25:41.864570 11086 out.go:296] Setting OutFile to fd 1 ...
I0108 13:25:41.864753 11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 13:25:41.864756 11086 out.go:309] Setting ErrFile to fd 2...
I0108 13:25:41.864759 11086 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0108 13:25:41.864887 11086 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-3013/.minikube/bin
I0108 13:25:41.865398 11086 out.go:303] Setting JSON to false
I0108 13:25:41.884483 11086 start.go:125] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5115,"bootTime":1673208026,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0.1","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
W0108 13:25:41.884582 11086 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0108 13:25:41.906843 11086 out.go:177] * [NoKubernetes-132541] minikube v1.28.0 on Darwin 13.0.1
I0108 13:25:41.948550 11086 notify.go:220] Checking for updates...
I0108 13:25:41.970851 11086 out.go:177] - MINIKUBE_LOCATION=15565
I0108 13:25:41.992551 11086 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/15565-3013/kubeconfig
I0108 13:25:42.013641 11086 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0108 13:25:42.034777 11086 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0108 13:25:42.056742 11086 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-3013/.minikube
I0108 13:25:42.079417 11086 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:25:42.079463 11086 driver.go:365] Setting default libvirt URI to qemu:///system
I0108 13:25:42.107801 11086 out.go:177] * Using the hyperkit driver based on user configuration
I0108 13:25:42.149620 11086 start.go:294] selected driver: hyperkit
I0108 13:25:42.149635 11086 start.go:838] validating driver "hyperkit" against <nil>
I0108 13:25:42.149659 11086 start.go:849] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0108 13:25:42.149782 11086 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 13:25:42.150003 11086 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/15565-3013/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0108 13:25:42.158308 11086 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.28.0
I0108 13:25:42.161836 11086 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:42.161854 11086 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0108 13:25:42.161898 11086 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I0108 13:25:42.164327 11086 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
I0108 13:25:42.164430 11086 start_flags.go:892] Wait components to verify : map[apiserver:true system_pods:true]
I0108 13:25:42.164452 11086 cni.go:95] Creating CNI manager for ""
I0108 13:25:42.164459 11086 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0108 13:25:42.164468 11086 start_flags.go:317] config:
{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I0108 13:25:42.164581 11086 iso.go:125] acquiring lock: {Name:mk509bccdb22b8c95ebe7c0f784c1151265efda4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0108 13:25:42.222410 11086 out.go:177] * Starting control plane node NoKubernetes-132541 in cluster NoKubernetes-132541
I0108 13:25:42.259872 11086 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I0108 13:25:42.260043 11086 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I0108 13:25:42.260082 11086 cache.go:57] Caching tarball of preloaded images
I0108 13:25:42.260294 11086 preload.go:174] Found /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0108 13:25:42.260312 11086 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I0108 13:25:42.260462 11086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json ...
I0108 13:25:42.260510 11086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/NoKubernetes-132541/config.json: {Name:mkb313010fa03f74b48c17380336d5ac233d014a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:42.261062 11086 cache.go:193] Successfully downloaded all kic artifacts
I0108 13:25:42.261110 11086 start.go:364] acquiring machines lock for NoKubernetes-132541: {Name:mk29e5f49e96ee5817a491da62b8738aae3fb506 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0108 13:25:42.261283 11086 start.go:368] acquired machines lock for "NoKubernetes-132541" in 157.235µs
I0108 13:25:42.261346 11086 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-132541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15565/minikube-v1.28.0-1673190013-15565-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:NoKubernetes-132541 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 13:25:42.261435 11086 start.go:125] createHost starting for "" (driver="hyperkit")
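The "acquiring machines lock" / "acquired machines lock" pair above is a named lock taken with a 500ms retry delay and a 13m timeout before createHost runs. A rough sketch of that acquire-with-retry-until-timeout pattern, using a hypothetical file lock rather than minikube's actual lock implementation:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock attempts to take the lock by creating the file exclusively;
// it reports false without error if another process already holds it.
func tryLock(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if err != nil {
		if os.IsExist(err) {
			return false, nil // lock currently held elsewhere
		}
		return false, err
	}
	return true, f.Close()
}

// acquire retries tryLock every delay until timeout elapses.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryLock(path)
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + path)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}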
I0108 13:25:40.902617 11017 pod_ready.go:102] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"False"
I0108 13:25:43.390748 11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.390761 11017 pod_ready.go:81] duration metric: took 4.508462855s waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.390767 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.400557 11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.400568 11017 pod_ready.go:81] duration metric: took 9.796554ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.400574 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.403156 11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:43.403166 11017 pod_ready.go:81] duration metric: took 2.587107ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:43.403174 11017 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.411451 11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.411465 11017 pod_ready.go:81] duration metric: took 1.008282022s waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.411472 11017 pod_ready.go:38] duration metric: took 14.051708866s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
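The pod_ready lines above poll each system-critical pod until its PodReady condition reports True, recording the elapsed time per pod. A minimal client-go sketch of the same per-pod wait; the kubeconfig path is a placeholder and the pod names are examples from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for _, name := range []string{"kube-apiserver-pause-132406", "kube-scheduler-pause-132406"} {
		start := time.Now()
		err := wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient apiserver blips; keep polling
			}
			return isPodReady(pod), nil
		})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q Ready after %s\n", name, time.Since(start))
	}
}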
I0108 13:25:44.411481 11017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0108 13:25:44.418863 11017 ops.go:34] apiserver oom_adj: -16
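The oom_adj check above confirms the kernel is strongly discouraged from OOM-killing the apiserver (-16). The same probe the log runs over SSH, executed locally on the node instead:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Identical to the command in the ssh_runner line above.
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out))) // -16 in this run
}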
I0108 13:25:44.418873 11017 kubeadm.go:631] restartCluster took 25.948745972s
I0108 13:25:44.418878 11017 kubeadm.go:398] StartCluster complete in 25.970227663s
I0108 13:25:44.418886 11017 settings.go:142] acquiring lock: {Name:mk8df047e431900506a7782529ec776808797932 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:44.418977 11017 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/15565-3013/kubeconfig
I0108 13:25:44.419424 11017 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-3013/kubeconfig: {Name:mk12e69a052d3b808fcdcd72ad62f9045d7b154d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0108 13:25:44.419963 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
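The dump above is a client-go rest.Config built from the profile's client cert/key and the cluster CA. A minimal sketch of constructing an equivalent config (certificate paths shortened to placeholders; the address is this run's):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Paths follow the profile layout shown in the log, shortened here.
	cfg := &rest.Config{
		Host: "https://192.168.64.27:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profiles/pause-132406/client.crt",
			KeyFile:  "/path/to/profiles/pause-132406/client.key",
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.25.3 in this run
}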
I0108 13:25:44.421604 11017 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-132406" rescaled to 1
I0108 13:25:44.421632 11017 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.64.27 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0108 13:25:44.421642 11017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0108 13:25:44.421664 11017 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I0108 13:25:44.464718 11017 out.go:177] * Verifying Kubernetes components...
I0108 13:25:44.464758 11017 addons.go:65] Setting storage-provisioner=true in profile "pause-132406"
I0108 13:25:44.485523 11017 addons.go:227] Setting addon storage-provisioner=true in "pause-132406"
I0108 13:25:44.464765 11017 addons.go:65] Setting default-storageclass=true in profile "pause-132406"
I0108 13:25:44.421794 11017 config.go:180] Loaded profile config "pause-132406": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I0108 13:25:44.475539 11017 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
W0108 13:25:44.485550 11017 addons.go:236] addon storage-provisioner should already be in state true
I0108 13:25:44.485556 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 13:25:44.485552 11017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-132406"
I0108 13:25:44.485605 11017 host.go:66] Checking if "pause-132406" exists ...
I0108 13:25:44.485877 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.485890 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.485895 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.485909 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.493358 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52925
I0108 13:25:44.493704 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52927
I0108 13:25:44.493766 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.494093 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.494097 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.494108 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.494325 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.494420 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.494437 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.494450 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.494517 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.494607 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.494633 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.495008 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.495031 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.496752 11017 node_ready.go:35] waiting up to 6m0s for node "pause-132406" to be "Ready" ...
I0108 13:25:44.497335 11017 kapi.go:59] client config for pause-132406: &rest.Config{Host:"https://192.168.64.27:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/profiles/pause-132406/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-3013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2448d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0108 13:25:44.498930 11017 node_ready.go:49] node "pause-132406" has status "Ready":"True"
I0108 13:25:44.498941 11017 node_ready.go:38] duration metric: took 2.059705ms waiting for node "pause-132406" to be "Ready" ...
I0108 13:25:44.498947 11017 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:44.499578 11017 addons.go:227] Setting addon default-storageclass=true in "pause-132406"
W0108 13:25:44.499589 11017 addons.go:236] addon default-storageclass should already be in state true
I0108 13:25:44.499606 11017 host.go:66] Checking if "pause-132406" exists ...
I0108 13:25:44.499869 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.499888 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.502432 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52929
I0108 13:25:44.502793 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.503162 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.503182 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.503337 11017 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.503425 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.503542 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.503638 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.503741 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.505184 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:44.526650 11017 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0108 13:25:44.507300 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52931
I0108 13:25:44.527054 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.547766 11017 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0108 13:25:44.547777 11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0108 13:25:44.547790 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:44.547909 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:44.548050 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.548063 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.548095 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:44.548190 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:44.548274 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.548290 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:44.548649 11017 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:44.548675 11017 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:44.555825 11017 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52934
I0108 13:25:44.556201 11017 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:44.556573 11017 main.go:134] libmachine: Using API Version 1
I0108 13:25:44.556585 11017 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:44.556785 11017 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:44.556890 11017 main.go:134] libmachine: (pause-132406) Calling .GetState
I0108 13:25:44.556978 11017 main.go:134] libmachine: (pause-132406) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.557074 11017 main.go:134] libmachine: (pause-132406) DBG | hyperkit pid from json: 10839
I0108 13:25:44.558022 11017 main.go:134] libmachine: (pause-132406) Calling .DriverName
I0108 13:25:44.558187 11017 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I0108 13:25:44.558196 11017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0108 13:25:44.558205 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHHostname
I0108 13:25:44.558288 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHPort
I0108 13:25:44.558385 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHKeyPath
I0108 13:25:44.558470 11017 main.go:134] libmachine: (pause-132406) Calling .GetSSHUsername
I0108 13:25:44.558547 11017 sshutil.go:53] new ssh client: &{IP:192.168.64.27 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/pause-132406/id_rsa Username:docker}
I0108 13:25:44.587109 11017 pod_ready.go:92] pod "coredns-565d847f94-t2bdb" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.587119 11017 pod_ready.go:81] duration metric: took 83.771886ms waiting for pod "coredns-565d847f94-t2bdb" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.587128 11017 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.599174 11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0108 13:25:44.609018 11017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
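The addon flow above stages each manifest on the node ("scp memory --> ..."), then applies it with the node's bundled kubectl against /var/lib/minikube/kubeconfig. A rough sketch of the same two steps driven through the ssh binary; the key path and manifest bytes are placeholders, and the address mirrors this run:

package main

import (
	"bytes"
	"os/exec"
)

// runOnNode executes a command on the minikube VM over ssh, optionally
// streaming stdin to it (the "scp memory" analogue).
func runOnNode(stdin []byte, command string) error {
	cmd := exec.Command("ssh",
		"-i", "/path/to/machines/pause-132406/id_rsa", // placeholder key path
		"docker@192.168.64.27", command)
	if stdin != nil {
		cmd.Stdin = bytes.NewReader(stdin)
	}
	return cmd.Run()
}

func main() {
	manifest := []byte("# storage-provisioner manifest bytes go here\n")
	// 1. Stage the manifest on the node.
	if err := runOnNode(manifest,
		"sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
		panic(err)
	}
	// 2. Apply it with the node's kubectl and kubeconfig, as in the log.
	if err := runOnNode(nil,
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		panic(err)
	}
}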
I0108 13:25:44.988772 11017 pod_ready.go:92] pod "etcd-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:44.988783 11017 pod_ready.go:81] duration metric: took 401.647771ms waiting for pod "etcd-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:44.988791 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.186660 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186678 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.186841 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186866 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.186869 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.186878 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.186883 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.186898 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.186912 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187089 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187104 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187114 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187131 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187115 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187146 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187125 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.187159 11017 main.go:134] libmachine: Making call to close driver server
I0108 13:25:45.187192 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187230 11017 main.go:134] libmachine: (pause-132406) Calling .Close
I0108 13:25:45.187349 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187402 11017 main.go:134] libmachine: (pause-132406) DBG | Closing plugin on server side
I0108 13:25:45.187401 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.187428 11017 main.go:134] libmachine: Successfully made call to close driver server
I0108 13:25:45.187436 11017 main.go:134] libmachine: Making call to close connection to plugin binary
I0108 13:25:45.245840 11017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0108 13:25:42.303634 11086 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I0108 13:25:42.304124 11086 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0108 13:25:42.304196 11086 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0108 13:25:42.312348 11086 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:52923
I0108 13:25:42.312703 11086 main.go:134] libmachine: () Calling .GetVersion
I0108 13:25:42.313112 11086 main.go:134] libmachine: Using API Version 1
I0108 13:25:42.313119 11086 main.go:134] libmachine: () Calling .SetConfigRaw
I0108 13:25:42.313353 11086 main.go:134] libmachine: () Calling .GetMachineName
I0108 13:25:42.313462 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetMachineName
I0108 13:25:42.313550 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .DriverName
I0108 13:25:42.313649 11086 start.go:159] libmachine.API.Create for "NoKubernetes-132541" (driver="hyperkit")
I0108 13:25:42.313676 11086 client.go:168] LocalClient.Create starting
I0108 13:25:42.313716 11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/ca.pem
I0108 13:25:42.313761 11086 main.go:134] libmachine: Decoding PEM data...
I0108 13:25:42.313774 11086 main.go:134] libmachine: Parsing certificate...
I0108 13:25:42.313838 11086 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-3013/.minikube/certs/cert.pem
I0108 13:25:42.313868 11086 main.go:134] libmachine: Decoding PEM data...
I0108 13:25:42.313878 11086 main.go:134] libmachine: Parsing certificate...
I0108 13:25:42.313894 11086 main.go:134] libmachine: Running pre-create checks...
I0108 13:25:42.313905 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .PreCreateCheck
I0108 13:25:42.313976 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.314125 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .GetConfigRaw
I0108 13:25:42.314532 11086 main.go:134] libmachine: Creating machine...
I0108 13:25:42.314538 11086 main.go:134] libmachine: (NoKubernetes-132541) Calling .Create
I0108 13:25:42.314603 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.314731 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.314597 11094 common.go:116] Making disk image using store path: /Users/jenkins/minikube-integration/15565-3013/.minikube
I0108 13:25:42.314792 11086 main.go:134] libmachine: (NoKubernetes-132541) Downloading /Users/jenkins/minikube-integration/15565-3013/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15565-3013/.minikube/cache/iso/amd64/minikube-v1.28.0-1673190013-15565-amd64.iso...
I0108 13:25:42.460398 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.460335 11094 common.go:123] Creating ssh key: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/id_rsa...
I0108 13:25:42.503141 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503046 11094 common.go:129] Creating raw disk image: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk...
I0108 13:25:42.503157 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing magic tar header
I0108 13:25:42.503171 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Writing SSH key tar header
I0108 13:25:42.503539 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | I0108 13:25:42.503489 11094 common.go:143] Fixing permissions on /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541 ...
I0108 13:25:42.653730 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.653744 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid
I0108 13:25:42.653784 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Using UUID 014f6508-8f9b-11ed-91e7-149d997fca88
I0108 13:25:42.678089 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Generated MAC 4e:f0:b3:1f:f:2b
I0108 13:25:42.678104 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541
I0108 13:25:42.678131 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0108 13:25:42.678168 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"014f6508-8f9b-11ed-91e7-149d997fca88", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000250e70)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage", Initrd:"/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd", Bootrom:"", CPUs:2, Memory:6000, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
I0108 13:25:42.678217 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid", "-c", "2", "-m", "6000M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "014f6508-8f9b-11ed-91e7-149d997fca88", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"}
I0108 13:25:42.678253 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/hyperkit.pid -c 2 -m 6000M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 014f6508-8f9b-11ed-91e7-149d997fca88 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/NoKubernetes-132541.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/tty,log=/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/console-ring -f kexec,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/bzimage,/Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=NoKubernetes-132541"
I0108 13:25:42.678262 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Redirecting stdout/stderr to logger
I0108 13:25:42.679546 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 DEBUG: hyperkit: Pid is 11097
I0108 13:25:42.679869 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 0
I0108 13:25:42.679885 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:42.679938 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
I0108 13:25:42.680931 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
I0108 13:25:42.681019 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
I0108 13:25:42.681065 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
I0108 13:25:42.681091 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
I0108 13:25:42.681107 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
I0108 13:25:42.681116 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
I0108 13:25:42.681130 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
I0108 13:25:42.681144 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
I0108 13:25:42.681154 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
I0108 13:25:42.681167 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
I0108 13:25:42.681181 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
I0108 13:25:42.681193 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
I0108 13:25:42.681203 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
I0108 13:25:42.681218 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
I0108 13:25:42.681228 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
I0108 13:25:42.681241 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
I0108 13:25:42.681253 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
I0108 13:25:42.681264 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
I0108 13:25:42.681275 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
I0108 13:25:42.681297 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
I0108 13:25:42.681310 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
I0108 13:25:42.681320 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
I0108 13:25:42.681333 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
I0108 13:25:42.681343 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
I0108 13:25:42.681355 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
I0108 13:25:42.681370 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
I0108 13:25:42.681383 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
I0108 13:25:42.681398 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
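The driver resolves the new VM's IP by rescanning /var/db/dhcpd_leases for the MAC it generated; note hyperkit reports MAC octets without leading zeros (4e:f0:b3:1f:f:2b above). A sketch of that scan, assuming the usual macOS vmnet lease-file layout in which ip_address precedes hw_address within each entry:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// findLeaseIP scans the macOS vmnet lease file for a MAC and returns the IP
// of the entry that carries it. The MAC must use the zero-stripped form the
// lease file itself uses (e.g. "f" rather than "0f").
func findLeaseIP(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ipRe := regexp.MustCompile(`ip_address=(\S+)`)
	hwRe := regexp.MustCompile(`hw_address=1,(\S+)`)

	var lastIP string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if m := ipRe.FindStringSubmatch(line); m != nil {
			lastIP = m[1] // remember the IP of the entry we are inside
		}
		if m := hwRe.FindStringSubmatch(line); m != nil && m[1] == mac {
			return lastIP, nil
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "4e:f0:b3:1f:f:2b")
	if err != nil {
		fmt.Println(err) // expected while the VM is still booting, as above
		return
	}
	fmt.Println("VM IP:", ip)
}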
I0108 13:25:42.686391 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
I0108 13:25:42.695558 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/15565-3013/.minikube/machines/NoKubernetes-132541/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
I0108 13:25:42.696153 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0108 13:25:42.696174 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0108 13:25:42.696188 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0108 13:25:42.696199 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:42 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0108 13:25:43.257820 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
I0108 13:25:43.257832 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
I0108 13:25:43.362873 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
I0108 13:25:43.362883 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
I0108 13:25:43.362890 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
I0108 13:25:43.362899 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
I0108 13:25:43.363783 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
I0108 13:25:43.363789 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | 2023/01/08 13:25:43 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
I0108 13:25:44.682748 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 1
I0108 13:25:44.682757 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:44.682837 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
I0108 13:25:44.684384 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
I0108 13:25:44.684451 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
I0108 13:25:44.684458 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
I0108 13:25:44.684475 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
I0108 13:25:44.684481 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
I0108 13:25:44.684487 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
I0108 13:25:44.684492 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
I0108 13:25:44.684504 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
I0108 13:25:44.684509 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
I0108 13:25:44.684516 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
I0108 13:25:44.684521 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
I0108 13:25:44.684527 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
I0108 13:25:44.684533 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
I0108 13:25:44.684541 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
I0108 13:25:44.684548 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
I0108 13:25:44.684554 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
I0108 13:25:44.684559 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
I0108 13:25:44.684578 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
I0108 13:25:44.684588 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
I0108 13:25:44.684597 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
I0108 13:25:44.684604 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
I0108 13:25:44.684610 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
I0108 13:25:44.684619 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
I0108 13:25:44.684625 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
I0108 13:25:44.684632 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
I0108 13:25:44.684637 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
I0108 13:25:44.684642 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
I0108 13:25:44.684651 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
I0108 13:25:46.686543 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Attempt 2
I0108 13:25:46.686557 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0108 13:25:46.686628 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | hyperkit pid from json: 11097
I0108 13:25:46.687410 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Searching for 4e:f0:b3:1f:f:2b in /var/db/dhcpd_leases ...
I0108 13:25:46.687458 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | Found 26 entries in /var/db/dhcpd_leases!
I0108 13:25:46.687466 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.26 HWAddress:da:4c:f9:c0:83:47 ID:1,da:4c:f9:c0:83:47 Lease:0x63bc8612}
I0108 13:25:46.687481 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.27 HWAddress:a2:44:36:6b:68:b8 ID:1,a2:44:36:6b:68:b8 Lease:0x63bc85ff}
I0108 13:25:46.687487 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.25 HWAddress:9a:64:4e:b9:b9:44 ID:1,9a:64:4e:b9:b9:44 Lease:0x63bb3475}
I0108 13:25:46.687494 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.24 HWAddress:ba:cc:22:10:41:cb ID:1,ba:cc:22:10:41:cb Lease:0x63bc84e7}
I0108 13:25:46.687499 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.23 HWAddress:3e:f7:d1:11:f9:61 ID:1,3e:f7:d1:11:f9:61 Lease:0x63bb334e}
I0108 13:25:46.687509 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.22 HWAddress:1a:20:c6:f:e0:2d ID:1,1a:20:c6:f:e0:2d Lease:0x63bc849e}
I0108 13:25:46.687514 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.21 HWAddress:7e:2d:4c:da:5f:85 ID:1,7e:2d:4c:da:5f:85 Lease:0x63bb331e}
I0108 13:25:46.687521 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.20 HWAddress:ea:2c:fd:1b:d6:7 ID:1,ea:2c:fd:1b:d6:7 Lease:0x63bb3315}
I0108 13:25:46.687526 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.19 HWAddress:ea:6f:3b:d4:62:ae ID:1,ea:6f:3b:d4:62:ae Lease:0x63bc8447}
I0108 13:25:46.687532 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.18 HWAddress:da:e6:bc:d0:c8:f2 ID:1,da:e6:bc:d0:c8:f2 Lease:0x63bc8436}
I0108 13:25:46.687540 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.17 HWAddress:e6:e4:fb:59:30:7a ID:1,e6:e4:fb:59:30:7a Lease:0x63bb32ac}
I0108 13:25:46.687545 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.16 HWAddress:56:6e:42:af:88:21 ID:1,56:6e:42:af:88:21 Lease:0x63bc837e}
I0108 13:25:46.687551 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.15 HWAddress:fa:f:28:59:92:81 ID:1,fa:f:28:59:92:81 Lease:0x63bc82ef}
I0108 13:25:46.687558 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.14 HWAddress:56:aa:6a:b7:76:a0 ID:1,56:aa:6a:b7:76:a0 Lease:0x63bc82be}
I0108 13:25:46.687564 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.13 HWAddress:ae:fc:4d:f4:df:e0 ID:1,ae:fc:4d:f4:df:e0 Lease:0x63bb2f04}
I0108 13:25:46.687569 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.12 HWAddress:be:f1:a2:69:d0:dc ID:1,be:f1:a2:69:d0:dc Lease:0x63bb3166}
I0108 13:25:46.687584 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.11 HWAddress:ce:11:55:19:1b:bc ID:1,ce:11:55:19:1b:bc Lease:0x63bb3164}
I0108 13:25:46.687591 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.10 HWAddress:1a:b6:29:53:dd:44 ID:1,1a:b6:29:53:dd:44 Lease:0x63bb2ae0}
I0108 13:25:46.687599 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.9 HWAddress:ea:1b:94:31:e9:2c ID:1,ea:1b:94:31:e9:2c Lease:0x63bb2acb}
I0108 13:25:46.687607 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.8 HWAddress:7e:95:4e:60:39:38 ID:1,7e:95:4e:60:39:38 Lease:0x63bb2aa5}
I0108 13:25:46.687612 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.7 HWAddress:56:7:65:39:b8:f4 ID:1,56:7:65:39:b8:f4 Lease:0x63bc7bd7}
I0108 13:25:46.687617 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.6 HWAddress:a:47:60:13:a:a6 ID:1,a:47:60:13:a:a6 Lease:0x63bc7b95}
I0108 13:25:46.687623 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.5 HWAddress:96:54:b2:b:96:5a ID:1,96:54:b2:b:96:5a Lease:0x63bb2a0b}
I0108 13:25:46.687629 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.4 HWAddress:22:33:31:80:e5:53 ID:1,22:33:31:80:e5:53 Lease:0x63bc79dc}
I0108 13:25:46.687636 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.3 HWAddress:c6:e3:59:ac:dc:8f ID:1,c6:e3:59:ac:dc:8f Lease:0x63bb2851}
I0108 13:25:46.687643 11086 main.go:134] libmachine: (NoKubernetes-132541) DBG | dhcp entry: {Name:minikube IPAddress:192.168.64.2 HWAddress:4a:1c:4:a4:25:f5 ID:1,4a:1c:4:a4:25:f5 Lease:0x63bc78c4}
I0108 13:25:45.282957 11017 addons.go:488] enableAddons completed in 861.279533ms
I0108 13:25:45.388516 11017 pod_ready.go:92] pod "kube-apiserver-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:45.388528 11017 pod_ready.go:81] duration metric: took 399.731294ms waiting for pod "kube-apiserver-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.388537 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.787890 11017 pod_ready.go:92] pod "kube-controller-manager-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:45.787901 11017 pod_ready.go:81] duration metric: took 399.340179ms waiting for pod "kube-controller-manager-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:45.787908 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.187439 11017 pod_ready.go:92] pod "kube-proxy-c2zj2" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:46.187453 11017 pod_ready.go:81] duration metric: took 399.536729ms waiting for pod "kube-proxy-c2zj2" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.187459 11017 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.588219 11017 pod_ready.go:92] pod "kube-scheduler-pause-132406" in "kube-system" namespace has status "Ready":"True"
I0108 13:25:46.588232 11017 pod_ready.go:81] duration metric: took 400.763589ms waiting for pod "kube-scheduler-pause-132406" in "kube-system" namespace to be "Ready" ...
I0108 13:25:46.588239 11017 pod_ready.go:38] duration metric: took 2.0892776s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0108 13:25:46.588288 11017 api_server.go:51] waiting for apiserver process to appear ...
I0108 13:25:46.588361 11017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0108 13:25:46.598144 11017 api_server.go:71] duration metric: took 2.176485692s to wait for apiserver process to appear ...
I0108 13:25:46.598158 11017 api_server.go:87] waiting for apiserver healthz status ...
I0108 13:25:46.598165 11017 api_server.go:252] Checking apiserver healthz at https://192.168.64.27:8443/healthz ...
I0108 13:25:46.602085 11017 api_server.go:278] https://192.168.64.27:8443/healthz returned 200:
ok
I0108 13:25:46.602639 11017 api_server.go:140] control plane version: v1.25.3
I0108 13:25:46.602648 11017 api_server.go:130] duration metric: took 4.486281ms to wait for apiserver health ...
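The healthz probe above is a plain HTTPS GET against the apiserver using the profile's client certificates, expecting a 200 with body "ok". A self-contained sketch of the same check; the address mirrors this run and the certificate paths are placeholders:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// checkHealthz GETs <host>/healthz with mutual-TLS client certs.
func checkHealthz(host, caFile, certFile, keyFile string) error {
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			RootCAs:      pool,
			Certificates: []tls.Certificate{cert},
		}},
	}
	resp, err := client.Get(host + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	err := checkHealthz("https://192.168.64.27:8443",
		"/path/to/.minikube/ca.crt",
		"/path/to/profiles/pause-132406/client.crt",
		"/path/to/profiles/pause-132406/client.key")
	fmt.Println("healthz ok:", err == nil)
}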
I0108 13:25:46.602654 11017 system_pods.go:43] waiting for kube-system pods to appear ...
I0108 13:25:46.791503 11017 system_pods.go:59] 7 kube-system pods found
I0108 13:25:46.791521 11017 system_pods.go:61] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:46.791526 11017 system_pods.go:61] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
I0108 13:25:46.791529 11017 system_pods.go:61] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
I0108 13:25:46.791533 11017 system_pods.go:61] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
I0108 13:25:46.791538 11017 system_pods.go:61] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
I0108 13:25:46.791542 11017 system_pods.go:61] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
I0108 13:25:46.791550 11017 system_pods.go:61] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0108 13:25:46.791556 11017 system_pods.go:74] duration metric: took 188.896938ms to wait for pod list to return data ...
I0108 13:25:46.791561 11017 default_sa.go:34] waiting for default service account to be created ...
I0108 13:25:46.988179 11017 default_sa.go:45] found service account: "default"
I0108 13:25:46.988192 11017 default_sa.go:55] duration metric: took 196.618556ms for default service account to be created ...
I0108 13:25:46.988197 11017 system_pods.go:116] waiting for k8s-apps to be running ...
I0108 13:25:47.191037 11017 system_pods.go:86] 7 kube-system pods found
I0108 13:25:47.191051 11017 system_pods.go:89] "coredns-565d847f94-t2bdb" [4b1d4603-7531-4c5b-b5d1-17f4712c727e] Running
I0108 13:25:47.191056 11017 system_pods.go:89] "etcd-pause-132406" [69af71f7-0f42-4ea6-98f6-5720512baa84] Running
I0108 13:25:47.191059 11017 system_pods.go:89] "kube-apiserver-pause-132406" [e8443dca-cdec-4e05-8ae7-d5ed49988ffa] Running
I0108 13:25:47.191062 11017 system_pods.go:89] "kube-controller-manager-pause-132406" [01efd276-f21b-4309-ba40-73d8e0790774] Running
I0108 13:25:47.191068 11017 system_pods.go:89] "kube-proxy-c2zj2" [06f5a965-c191-491e-a8ca-81e45cdab1e0] Running
I0108 13:25:47.191071 11017 system_pods.go:89] "kube-scheduler-pause-132406" [73b60b1b-4f6f-474f-ba27-15a6c1019ffb] Running
I0108 13:25:47.191075 11017 system_pods.go:89] "storage-provisioner" [a4d0a073-64e2-44d3-b701-67c31b2c9dcb] Running
I0108 13:25:47.191079 11017 system_pods.go:126] duration metric: took 202.877582ms to wait for k8s-apps to be running ...
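The system_pods check above lists kube-system pods and requires each to be in phase Running. A compact client-go sketch of that verification, assuming a configured *kubernetes.Clientset as built in the earlier sketches:

package verify

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// SystemPodsRunning lists kube-system pods and reports the first one that is
// not Running, mirroring the k8s-apps check in the log.
func SystemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %q is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	return nil
}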
I0108 13:25:47.191083 11017 system_svc.go:44] waiting for kubelet service to be running ....
I0108 13:25:47.191143 11017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0108 13:25:47.200849 11017 system_svc.go:56] duration metric: took 9.761745ms WaitForService to wait for kubelet.
I0108 13:25:47.200862 11017 kubeadm.go:573] duration metric: took 2.779206372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0108 13:25:47.200873 11017 node_conditions.go:102] verifying NodePressure condition ...
I0108 13:25:47.388983 11017 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0108 13:25:47.388998 11017 node_conditions.go:123] node cpu capacity is 2
I0108 13:25:47.389006 11017 node_conditions.go:105] duration metric: took 188.128513ms to run NodePressure ...
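The NodePressure lines read the node's ephemeral-storage and CPU capacity from its status. A sketch of the same read, again assuming a configured clientset (node name taken from this run):

package verify

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeCapacity prints the two capacity figures the NodePressure check logs.
func NodeCapacity(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}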
I0108 13:25:47.389012 11017 start.go:217] waiting for startup goroutines ...
I0108 13:25:47.389347 11017 ssh_runner.go:195] Run: rm -f paused
I0108 13:25:47.433718 11017 start.go:536] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
I0108 13:25:47.476634 11017 out.go:177] * Done! kubectl is now configured to use "pause-132406" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:51 UTC. --
Jan 08 21:25:25 pause-132406 dockerd[3702]: time="2023-01-08T21:25:25.290659171Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3314497202fc8ceeb27ca7190e02eafa07a3b2174edff746b07ed7a18bb2797e pid=5489 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310282074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310632315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.310689702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.311253391Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc pid=5652 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590607044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590804961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.590861947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.591114210Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/cfca5ca38f1bb40a1f783df11849538c078a7ea84cd1507a93401e6ac921043c pid=5701 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696304305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696340262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696348212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.696786183Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a037098dc5d0363118aa47fc6662a0cb9803f357dbe7488d39ac54fbda264a85 pid=5742 runtime=io.containerd.runc.v2
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852485421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852650670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.852752050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:30 pause-132406 dockerd[3702]: time="2023-01-08T21:25:30.853144561Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2fa6736cca283a0849a4133c4846ff785f1dabecc824ab55422b9fe1df5fb20e pid=5806 runtime=io.containerd.runc.v2
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693887010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693975959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.693985352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:45 pause-132406 dockerd[3702]: time="2023-01-08T21:25:45.694546414Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8 pid=6088 runtime=io.containerd.runc.v2
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.042984174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043017322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043025311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 08 21:25:46 pause-132406 dockerd[3702]: time="2023-01-08T21:25:46.043168759Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/47155f3f92e2edb1c9b9544dbec392073d4a267f0e2e171c4c0c8f41eed1b42d pid=6226 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
47155f3f92e2e     6e38f40d628db   5 seconds ago    Running   storage-provisioner       0         7509d84ccc611
2fa6736cca283     5185b96f0becf   21 seconds ago   Running   coredns                   2         2c864c071578b
a037098dc5d03     beaaf00edd38a   21 seconds ago   Running   kube-proxy                2         cfca5ca38f1bb
3314497202fc8     6d23ec0e8b87e   26 seconds ago   Running   kube-scheduler            3         7225e68b6cdb9
2702ef37e8c9f     a8a176a5d5d69   26 seconds ago   Running   etcd                      3         d6054662c415b
85b18341d5fa3     6039992312758   27 seconds ago   Running   kube-controller-manager   3         167990773c8df
e49c330971e33     0346dbd74bcb9   27 seconds ago   Running   kube-apiserver            3         7719cf6e2ded6
6c8e664a440de     6d23ec0e8b87e   29 seconds ago   Created   kube-scheduler            2         b17d288e92aba
5836a9370f77e     beaaf00edd38a   29 seconds ago   Created   kube-proxy                1         d0c6f1675c8df
bf3a9fcdde4ed     5185b96f0becf   29 seconds ago   Created   coredns                   1         80b9970570ee9
359f540cb31f6     6039992312758   30 seconds ago   Created   kube-controller-manager   2         82b65485dbb4d
b3ea39090c67a     a8a176a5d5d69   30 seconds ago   Exited    etcd                      2         d4f72481538e7
a59e122b43f1f     0346dbd74bcb9   30 seconds ago   Exited    kube-apiserver            2         b5535145a6cf3
c2ddc4b3adc5e     5185b96f0becf   56 seconds ago   Exited    coredns                   0         11b52ad80c153
*
* ==> coredns [2fa6736cca28] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 7135f430aea492809ab227b028bd16c96f6629e00404d9ec4f44cae029eb3743d1cfe4a9d0cc8fbbd4cfa53556972f2bbf615e7c9e8412e85d290539257166ad
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [bf3a9fcdde4e] <==
*
*
* ==> coredns [c2ddc4b3adc5] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: pause-132406
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-132406
kubernetes.io/os=linux
minikube.k8s.io/commit=85283e47cf16e06ca2b7e3404d99b788f950f286
minikube.k8s.io/name=pause-132406
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_01_08T13_24_43_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 08 Jan 2023 21:24:42 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-132406
AcquireTime: <unset>
RenewTime: Sun, 08 Jan 2023 21:25:49 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:24:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 08 Jan 2023 21:25:28 +0000   Sun, 08 Jan 2023 21:25:28 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.27
Hostname: pause-132406
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017572Ki
pods: 110
System Info:
Machine ID: 7ba663e0089540a7aff02be8cb7e7914
System UUID: c84e11ed-0000-0000-a16b-149d997fca88
Boot ID: e1c358fb-4be5-406e-aa57-71fbfb8be72e
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.21
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
  Namespace    Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                    ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-565d847f94-t2bdb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
  kube-system  etcd-pause-132406                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         68s
  kube-system  kube-apiserver-pause-132406             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
  kube-system  kube-controller-manager-pause-132406    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
  kube-system  kube-proxy-c2zj2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
  kube-system  kube-scheduler-pause-132406             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
  kube-system  storage-provisioner                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (37%)  0 (0%)
  memory             170Mi (8%)  170Mi (8%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 57s                kube-proxy
  Normal  Starting                 21s                kube-proxy
  Normal  NodeHasSufficientPID     69s                kubelet          Node pause-132406 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  69s                kubelet          Node pause-132406 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    69s                kubelet          Node pause-132406 status is now: NodeHasNoDiskPressure
  Normal  NodeReady                69s                kubelet          Node pause-132406 status is now: NodeReady
  Normal  Starting                 69s                kubelet          Starting kubelet.
  Normal  RegisteredNode           58s                node-controller  Node pause-132406 event: Registered Node pause-132406 in Controller
  Normal  Starting                 29s                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-132406 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-132406 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node pause-132406 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
  Normal  RegisteredNode           11s                node-controller  Node pause-132406 event: Registered Node pause-132406 in Controller
*
* ==> dmesg <==
* [ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.891901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +0.787838] systemd-fstab-generator[528]: Ignoring "noauto" for root device
[ +0.090038] systemd-fstab-generator[539]: Ignoring "noauto" for root device
[ +5.154737] systemd-fstab-generator[759]: Ignoring "noauto" for root device
[ +1.211845] kauditd_printk_skb: 16 callbacks suppressed
[ +0.212584] systemd-fstab-generator[921]: Ignoring "noauto" for root device
[ +0.092038] systemd-fstab-generator[932]: Ignoring "noauto" for root device
[ +0.088864] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +1.451512] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
[ +0.096327] systemd-fstab-generator[1105]: Ignoring "noauto" for root device
[ +3.011005] systemd-fstab-generator[1323]: Ignoring "noauto" for root device
[ +0.609217] kauditd_printk_skb: 68 callbacks suppressed
[ +14.122766] systemd-fstab-generator[1992]: Ignoring "noauto" for root device
[ +11.883875] kauditd_printk_skb: 8 callbacks suppressed
[ +5.253764] systemd-fstab-generator[2883]: Ignoring "noauto" for root device
[ +0.141331] systemd-fstab-generator[2894]: Ignoring "noauto" for root device
[Jan 8 21:25] systemd-fstab-generator[2905]: Ignoring "noauto" for root device
[ +0.401098] kauditd_printk_skb: 18 callbacks suppressed
[ +16.643218] systemd-fstab-generator[4108]: Ignoring "noauto" for root device
[ +0.107408] systemd-fstab-generator[4162]: Ignoring "noauto" for root device
[ +5.496654] systemd-fstab-generator[5099]: Ignoring "noauto" for root device
[ +6.803519] kauditd_printk_skb: 31 callbacks suppressed
*
* ==> etcd [2702ef37e8c9] <==
* {"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-08T21:25:25.967Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:25.968Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-08T21:25:25.971Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-08T21:25:25.972Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 is starting a new election at term 3"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became pre-candidate at term 3"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgPreVoteResp from d9a8ee5ed7997f86 at term 3"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became candidate at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 received MsgVoteResp from d9a8ee5ed7997f86 at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became leader at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9a8ee5ed7997f86 elected leader d9a8ee5ed7997f86 at term 4"}
{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d9a8ee5ed7997f86","local-member-attributes":"{Name:pause-132406 ClientURLs:[https://192.168.64.27:2379]}","request-path":"/0/members/d9a8ee5ed7997f86/attributes","cluster-id":"d657f6537ff55566","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-08T21:25:26.967Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-01-08T21:25:26.968Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-01-08T21:25:26.970Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.27:2379"}
*
* ==> etcd [b3ea39090c67] <==
* {"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"]}
{"level":"info","ts":"2023-01-08T21:25:22.354Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.4","git-sha":"08407ff76","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-132406","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147
483648,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2023-01-08T21:25:22.355Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"417.198µs"}
{"level":"info","ts":"2023-01-08T21:25:22.363Z","caller":"etcdserver/server.go:529","msg":"No snapshot found. Recovering WAL from scratch!"}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","caller":"etcdserver/raft.go:483","msg":"restarting local member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","commit-index":399}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=()"}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 became follower at term 3"}
{"level":"info","ts":"2023-01-08T21:25:22.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft d9a8ee5ed7997f86 [peers: [], term: 3, commit: 399, applied: 0, lastindex: 399, lastterm: 3]"}
{"level":"warn","ts":"2023-01-08T21:25:22.366Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2023-01-08T21:25:22.367Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":382}
{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2023-01-08T21:25:22.368Z","caller":"etcdserver/corrupt.go:46","msg":"starting initial corruption check","local-member-id":"d9a8ee5ed7997f86","timeout":"7s"}
{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/corrupt.go:116","msg":"initial corruption checking passed; no corruption","local-member-id":"d9a8ee5ed7997f86"}
{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d9a8ee5ed7997f86","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
{"level":"info","ts":"2023-01-08T21:25:22.369Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2023-01-08T21:25:22.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9a8ee5ed7997f86 switched to configuration voters=(15684047793429249926)"}
{"level":"info","ts":"2023-01-08T21:25:22.370Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","added-peer-id":"d9a8ee5ed7997f86","added-peer-peer-urls":["https://192.168.64.27:2380"]}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d657f6537ff55566","local-member-id":"d9a8ee5ed7997f86","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d9a8ee5ed7997f86","initial-advertise-peer-urls":["https://192.168.64.27:2380"],"listen-peer-urls":["https://192.168.64.27:2380"],"advertise-client-urls":["https://192.168.64.27:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.27:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.27:2380"}
{"level":"info","ts":"2023-01-08T21:25:22.371Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.27:2380"}
*
* ==> kernel <==
* 21:25:52 up 1 min, 0 users, load average: 0.89, 0.30, 0.11
Linux pause-132406 5.10.57 #1 SMP Sun Jan 8 19:17:02 UTC 2023 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [a59e122b43f1] <==
*
*
* ==> kube-apiserver [e49c330971e3] <==
* I0108 21:25:28.700297 1 controller.go:85] Starting OpenAPI V3 controller
I0108 21:25:28.700429 1 naming_controller.go:291] Starting NamingConditionController
I0108 21:25:28.700511 1 establishing_controller.go:76] Starting EstablishingController
I0108 21:25:28.700559 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0108 21:25:28.701254 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0108 21:25:28.701371 1 crd_finalizer.go:266] Starting CRDFinalizer
I0108 21:25:28.701544 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0108 21:25:28.702007 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0108 21:25:28.782489 1 shared_informer.go:262] Caches are synced for node_authorizer
I0108 21:25:28.791936 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0108 21:25:28.792377 1 apf_controller.go:305] Running API Priority and Fairness config worker
I0108 21:25:28.792956 1 cache.go:39] Caches are synced for autoregister controller
I0108 21:25:28.793154 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0108 21:25:28.795220 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0108 21:25:28.800502 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0108 21:25:28.858432 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I0108 21:25:29.472691 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0108 21:25:29.697351 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0108 21:25:30.382302 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I0108 21:25:30.390603 1 controller.go:616] quota admission added evaluator for: deployments.apps
I0108 21:25:30.412248 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I0108 21:25:30.430489 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0108 21:25:30.435393 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0108 21:25:41.061315 1 controller.go:616] quota admission added evaluator for: endpoints
I0108 21:25:41.169568 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [359f540cb31f] <==
*
*
* ==> kube-controller-manager [85b18341d5fa] <==
* I0108 21:25:41.096790 1 shared_informer.go:262] Caches are synced for expand
I0108 21:25:41.099296 1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
I0108 21:25:41.115597 1 shared_informer.go:262] Caches are synced for deployment
I0108 21:25:41.119008 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0108 21:25:41.133526 1 shared_informer.go:262] Caches are synced for node
I0108 21:25:41.133592 1 range_allocator.go:166] Starting range CIDR allocator
I0108 21:25:41.133606 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I0108 21:25:41.133651 1 shared_informer.go:262] Caches are synced for cidrallocator
I0108 21:25:41.142840 1 shared_informer.go:262] Caches are synced for daemon sets
I0108 21:25:41.157497 1 shared_informer.go:262] Caches are synced for taint
I0108 21:25:41.157759 1 taint_manager.go:204] "Starting NoExecuteTaintManager"
I0108 21:25:41.157993 1 taint_manager.go:209] "Sending events to api server"
I0108 21:25:41.157772 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0108 21:25:41.158417 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-132406. Assuming now as a timestamp.
I0108 21:25:41.158643 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0108 21:25:41.158045 1 event.go:294] "Event occurred" object="pause-132406" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-132406 event: Registered Node pause-132406 in Controller"
I0108 21:25:41.160214 1 shared_informer.go:262] Caches are synced for persistent volume
I0108 21:25:41.161892 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0108 21:25:41.162048 1 shared_informer.go:262] Caches are synced for GC
I0108 21:25:41.171471 1 shared_informer.go:262] Caches are synced for TTL
I0108 21:25:41.197886 1 shared_informer.go:262] Caches are synced for resource quota
I0108 21:25:41.235793 1 shared_informer.go:262] Caches are synced for resource quota
I0108 21:25:41.610248 1 shared_informer.go:262] Caches are synced for garbage collector
I0108 21:25:41.657226 1 shared_informer.go:262] Caches are synced for garbage collector
I0108 21:25:41.657437 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [5836a9370f77] <==
*
*
* ==> kube-proxy [a037098dc5d0] <==
* I0108 21:25:30.850478 1 node.go:163] Successfully retrieved node IP: 192.168.64.27
I0108 21:25:30.850523 1 server_others.go:138] "Detected node IP" address="192.168.64.27"
I0108 21:25:30.850546 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0108 21:25:30.900885 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0108 21:25:30.901123 1 server_others.go:206] "Using iptables Proxier"
I0108 21:25:30.901146 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0108 21:25:30.902097 1 server.go:661] "Version info" version="v1.25.3"
I0108 21:25:30.902216 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0108 21:25:30.902654 1 config.go:317] "Starting service config controller"
I0108 21:25:30.902693 1 shared_informer.go:255] Waiting for caches to sync for service config
I0108 21:25:30.902720 1 config.go:226] "Starting endpoint slice config controller"
I0108 21:25:30.902731 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0108 21:25:30.903108 1 config.go:444] "Starting node config controller"
I0108 21:25:30.903471 1 shared_informer.go:255] Waiting for caches to sync for node config
I0108 21:25:31.002821 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0108 21:25:31.002956 1 shared_informer.go:262] Caches are synced for service config
I0108 21:25:31.003974 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-scheduler [3314497202fc] <==
* I0108 21:25:26.516401 1 serving.go:348] Generated self-signed cert in-memory
W0108 21:25:28.747634 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0108 21:25:28.747668 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0108 21:25:28.747676 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W0108 21:25:28.747682 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0108 21:25:28.778528 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I0108 21:25:28.778884 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0108 21:25:28.780681 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0108 21:25:28.780730 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0108 21:25:28.781271 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0108 21:25:28.780750 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0108 21:25:28.882423 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [6c8e664a440d] <==
*
*
* ==> kubelet <==
* -- Journal begins at Sun 2023-01-08 21:24:13 UTC, ends at Sun 2023-01-08 21:25:53 UTC. --
Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.600279 5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
Jan 08 21:25:28 pause-132406 kubelet[5105]: E0108 21:25:28.700806 5105 kubelet.go:2448] "Error getting node" err="node \"pause-132406\" not found"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.801519 5105 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.802334 5105 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818830 5105 kubelet_node_status.go:108] "Node was previously registered" node="pause-132406"
Jan 08 21:25:28 pause-132406 kubelet[5105]: I0108 21:25:28.818970 5105 kubelet_node_status.go:73] "Successfully registered node" node="pause-132406"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.648000 5105 apiserver.go:52] "Watching apiserver"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649885 5105 topology_manager.go:205] "Topology Admit Handler"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.649962 5105 topology_manager.go:205] "Topology Admit Handler"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.709981 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b1d4603-7531-4c5b-b5d1-17f4712c727e-config-volume\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710359 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4nq\" (UniqueName: \"kubernetes.io/projected/4b1d4603-7531-4c5b-b5d1-17f4712c727e-kube-api-access-bm4nq\") pod \"coredns-565d847f94-t2bdb\" (UID: \"4b1d4603-7531-4c5b-b5d1-17f4712c727e\") " pod="kube-system/coredns-565d847f94-t2bdb"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710457 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-proxy\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710554 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-xtables-lock\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710604 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzq48\" (UniqueName: \"kubernetes.io/projected/06f5a965-c191-491e-a8ca-81e45cdab1e0-kube-api-access-lzq48\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710707 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06f5a965-c191-491e-a8ca-81e45cdab1e0-lib-modules\") pod \"kube-proxy-c2zj2\" (UID: \"06f5a965-c191-491e-a8ca-81e45cdab1e0\") " pod="kube-system/kube-proxy-c2zj2"
Jan 08 21:25:29 pause-132406 kubelet[5105]: I0108 21:25:29.710785 5105 reconciler.go:169] "Reconciler: start to sync state"
Jan 08 21:25:30 pause-132406 kubelet[5105]: I0108 21:25:30.786116 5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2c864c071578be13dc25e84f4d73ec21beecae7650ed31f40171521323b956bc"
Jan 08 21:25:32 pause-132406 kubelet[5105]: I0108 21:25:32.815079 5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Jan 08 21:25:38 pause-132406 kubelet[5105]: I0108 21:25:38.949973 5105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.283961 5105 topology_manager.go:205] "Topology Admit Handler"
Jan 08 21:25:45 pause-132406 kubelet[5105]: E0108 21:25:45.284028 5105 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.284048 5105 memory_manager.go:345] "RemoveStaleState removing state" podUID="877d71f1-d869-4d8d-8534-9b676cc5beb0" containerName="coredns"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.379908 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-tmp\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.380046 5105 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dd8\" (UniqueName: \"kubernetes.io/projected/a4d0a073-64e2-44d3-b701-67c31b2c9dcb-kube-api-access-b5dd8\") pod \"storage-provisioner\" (UID: \"a4d0a073-64e2-44d3-b701-67c31b2c9dcb\") " pod="kube-system/storage-provisioner"
Jan 08 21:25:45 pause-132406 kubelet[5105]: I0108 21:25:45.964235 5105 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7509d84ccc611055e0a390b6d4f9edf99f5625ea09b62d1eae87e614b0930aa8"
*
* ==> storage-provisioner [47155f3f92e2] <==
* I0108 21:25:46.098217 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0108 21:25:46.107255 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0108 21:25:46.107432 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0108 21:25:46.112096 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0108 21:25:46.112481 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
I0108 21:25:46.113189 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4245f6bb-b0ff-44ce-bc47-687e46bad904", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5 became leader
I0108 21:25:46.217361 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-132406_786bf454-c8d9-4c47-a499-d6161363a1e5!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-132406 -n pause-132406
helpers_test.go:261: (dbg) Run: kubectl --context pause-132406 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-132406 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-132406 describe pod : exit status 1 (39.902902ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-132406 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (55.05s)