=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-darwin-amd64 start -p pause-20220725124607-24757 --alsologtostderr -v=1 --driver=hyperkit
E0725 12:47:05.217322 24757 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/skaffold-20220725124025-24757/client.crt: no such file or directory
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220725124607-24757 --alsologtostderr -v=1 --driver=hyperkit : (59.68422089s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-20220725124607-24757] minikube v1.26.0 on Darwin 12.4
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
* Using the hyperkit driver based on existing profile
* Starting control plane node pause-20220725124607-24757 in cluster pause-20220725124607-24757
* Updating the running hyperkit "pause-20220725124607-24757" VM ...
* Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-20220725124607-24757" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0725 12:47:03.968537 32449 out.go:296] Setting OutFile to fd 1 ...
I0725 12:47:03.968759 32449 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:47:03.968766 32449 out.go:309] Setting ErrFile to fd 2...
I0725 12:47:03.968771 32449 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:47:03.968881 32449 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
I0725 12:47:03.969323 32449 out.go:303] Setting JSON to false
I0725 12:47:03.985948 32449 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":9996,"bootTime":1658768427,"procs":362,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0725 12:47:03.986036 32449 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0725 12:47:04.026068 32449 out.go:177] * [pause-20220725124607-24757] minikube v1.26.0 on Darwin 12.4
I0725 12:47:04.047436 32449 notify.go:193] Checking for updates...
I0725 12:47:04.068466 32449 out.go:177] - MINIKUBE_LOCATION=14555
I0725 12:47:04.143167 32449 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:04.201096 32449 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0725 12:47:04.260238 32449 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0725 12:47:04.317333 32449 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
I0725 12:47:04.356071 32449 config.go:178] Loaded profile config "pause-20220725124607-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:47:04.356780 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:04.356855 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:04.364000 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51411
I0725 12:47:04.364359 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:04.364753 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:04.364764 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:04.364993 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:04.365098 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:04.365208 32449 driver.go:365] Setting default libvirt URI to qemu:///system
I0725 12:47:04.365466 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:04.365488 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:04.371585 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51413
I0725 12:47:04.371888 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:04.372204 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:04.372215 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:04.372413 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:04.372502 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:04.399108 32449 out.go:177] * Using the hyperkit driver based on existing profile
I0725 12:47:04.420210 32449 start.go:284] selected driver: hyperkit
I0725 12:47:04.420231 32449 start.go:808] validating driver "hyperkit" against &{Name:pause-20220725124607-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:04.420364 32449 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0725 12:47:04.420441 32449 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:04.420563 32449 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0725 12:47:04.427186 32449 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.26.0
I0725 12:47:04.430161 32449 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:04.430179 32449 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0725 12:47:04.432090 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:04.432108 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:04.432132 32449 start_flags.go:310] config:
{Name:pause-20220725124607-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:04.432302 32449 iso.go:128] acquiring lock: {Name:mk75e62a3ceeaef3aefa2a3a9c617c6e59d820a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:04.453479 32449 out.go:177] * Starting control plane node pause-20220725124607-24757 in cluster pause-20220725124607-24757
I0725 12:47:04.475156 32449 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0725 12:47:04.475238 32449 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4
I0725 12:47:04.475272 32449 cache.go:57] Caching tarball of preloaded images
I0725 12:47:04.475441 32449 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0725 12:47:04.475462 32449 cache.go:60] Finished verifying existence of preloaded tar for v1.24.2 on docker
I0725 12:47:04.475630 32449 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/config.json ...
I0725 12:47:04.476438 32449 cache.go:208] Successfully downloaded all kic artifacts
I0725 12:47:04.476496 32449 start.go:370] acquiring machines lock for pause-20220725124607-24757: {Name:mk6dd10c27893192a420c40bba76224953275f58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0725 12:47:04.476586 32449 start.go:374] acquired machines lock for "pause-20220725124607-24757" in 71.849µs
I0725 12:47:04.476615 32449 start.go:95] Skipping create...Using existing machine configuration
I0725 12:47:04.476632 32449 fix.go:55] fixHost starting:
I0725 12:47:04.477068 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:04.477112 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:04.484031 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51415
I0725 12:47:04.484399 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:04.484715 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:04.484728 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:04.484977 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:04.485092 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:04.485175 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:04.485255 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:04.485332 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:04.486115 32449 fix.go:103] recreateIfNeeded on pause-20220725124607-24757: state=Running err=<nil>
W0725 12:47:04.486131 32449 fix.go:129] unexpected machine state, will restart: <nil>
I0725 12:47:04.528240 32449 out.go:177] * Updating the running hyperkit "pause-20220725124607-24757" VM ...
I0725 12:47:04.548969 32449 machine.go:88] provisioning docker machine ...
I0725 12:47:04.549008 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:04.549299 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetMachineName
I0725 12:47:04.549462 32449 buildroot.go:166] provisioning hostname "pause-20220725124607-24757"
I0725 12:47:04.549477 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetMachineName
I0725 12:47:04.549610 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:04.549750 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:04.549870 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.549977 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.550081 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:04.550236 32449 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:04.550464 32449 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0725 12:47:04.550477 32449 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-20220725124607-24757 && echo "pause-20220725124607-24757" | sudo tee /etc/hostname
I0725 12:47:04.640577 32449 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220725124607-24757
I0725 12:47:04.640602 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:04.640768 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:04.640857 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.640934 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.641032 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:04.641164 32449 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:04.641280 32449 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0725 12:47:04.641294 32449 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-20220725124607-24757' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220725124607-24757/g' /etc/hosts;
else
echo '127.0.1.1 pause-20220725124607-24757' | sudo tee -a /etc/hosts;
fi
fi
I0725 12:47:04.718578 32449 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0725 12:47:04.718598 32449 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
I0725 12:47:04.718625 32449 buildroot.go:174] setting up certificates
I0725 12:47:04.718638 32449 provision.go:83] configureAuth start
I0725 12:47:04.718645 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetMachineName
I0725 12:47:04.718780 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetIP
I0725 12:47:04.718867 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:04.718952 32449 provision.go:138] copyHostCerts
I0725 12:47:04.719025 32449 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
I0725 12:47:04.719035 32449 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
I0725 12:47:04.719154 32449 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1078 bytes)
I0725 12:47:04.719362 32449 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
I0725 12:47:04.719371 32449 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
I0725 12:47:04.719429 32449 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
I0725 12:47:04.719578 32449 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
I0725 12:47:04.719583 32449 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
I0725 12:47:04.719637 32449 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1679 bytes)
I0725 12:47:04.719758 32449 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.pause-20220725124607-24757 san=[192.168.64.23 192.168.64.23 localhost 127.0.0.1 minikube pause-20220725124607-24757]
I0725 12:47:04.787628 32449 provision.go:172] copyRemoteCerts
I0725 12:47:04.787764 32449 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0725 12:47:04.787800 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:04.788098 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:04.788264 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.788436 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:04.788596 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:04.836347 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0725 12:47:04.853852 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
I0725 12:47:04.870926 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0725 12:47:04.888274 32449 provision.go:86] duration metric: configureAuth took 169.6268ms
I0725 12:47:04.888288 32449 buildroot.go:189] setting minikube options for container-runtime
I0725 12:47:04.888441 32449 config.go:178] Loaded profile config "pause-20220725124607-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:47:04.888467 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:04.888598 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:04.888672 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:04.888756 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.888842 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.888925 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:04.889028 32449 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:04.889130 32449 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0725 12:47:04.889137 32449 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0725 12:47:04.967990 32449 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0725 12:47:04.968006 32449 buildroot.go:70] root file system type: tmpfs
I0725 12:47:04.968111 32449 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0725 12:47:04.968125 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:04.968272 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:04.968367 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.968467 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:04.968565 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:04.968726 32449 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:04.968854 32449 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0725 12:47:04.968904 32449 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0725 12:47:05.054322 32449 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0725 12:47:05.054342 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:05.054479 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:05.054565 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.054665 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.054770 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:05.054899 32449 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:05.055011 32449 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0725 12:47:05.055025 32449 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0725 12:47:05.136911 32449 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0725 12:47:05.136928 32449 machine.go:91] provisioned docker machine in 587.950599ms
I0725 12:47:05.136951 32449 start.go:307] post-start starting for "pause-20220725124607-24757" (driver="hyperkit")
I0725 12:47:05.136979 32449 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0725 12:47:05.136990 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.137212 32449 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0725 12:47:05.137232 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:05.137327 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:05.137414 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.137506 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:05.137633 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:05.183622 32449 ssh_runner.go:195] Run: cat /etc/os-release
I0725 12:47:05.187736 32449 info.go:137] Remote host: Buildroot 2021.02.12
I0725 12:47:05.187755 32449 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
I0725 12:47:05.187920 32449 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
I0725 12:47:05.188086 32449 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem -> 247572.pem in /etc/ssl/certs
I0725 12:47:05.188293 32449 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0725 12:47:05.195801 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:05.211946 32449 start.go:310] post-start completed in 74.988019ms
I0725 12:47:05.211964 32449 fix.go:57] fixHost completed within 735.355929ms
I0725 12:47:05.211979 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:05.212109 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:05.212191 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.212272 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.212349 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:05.212468 32449 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:05.212573 32449 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.23 22 <nil> <nil>}
I0725 12:47:05.212580 32449 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I0725 12:47:05.291416 32449 main.go:134] libmachine: SSH cmd err, output: <nil>: 1658778425.595874028
I0725 12:47:05.291429 32449 fix.go:207] guest clock: 1658778425.595874028
I0725 12:47:05.291434 32449 fix.go:220] Guest: 2022-07-25 12:47:05.595874028 -0700 PDT Remote: 2022-07-25 12:47:05.211967 -0700 PDT m=+1.290384462 (delta=383.907028ms)
I0725 12:47:05.291453 32449 fix.go:191] guest clock delta is within tolerance: 383.907028ms
I0725 12:47:05.291458 32449 start.go:82] releasing machines lock for "pause-20220725124607-24757", held for 814.879178ms
I0725 12:47:05.291475 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.291595 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetIP
I0725 12:47:05.291689 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.291770 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.291851 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.292123 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.292228 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:05.292347 32449 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0725 12:47:05.292369 32449 ssh_runner.go:195] Run: systemctl --version
I0725 12:47:05.292372 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:05.292385 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:05.292459 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:05.292495 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:05.292549 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.292594 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:05.292623 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:05.292677 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:05.292689 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:05.292759 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:05.452875 32449 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0725 12:47:05.453006 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:05.473402 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:05.473416 32449 docker.go:542] Images already preloaded, skipping extraction
I0725 12:47:05.473475 32449 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0725 12:47:05.483032 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0725 12:47:05.493217 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0725 12:47:05.501982 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0725 12:47:05.513898 32449 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0725 12:47:05.639731 32449 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0725 12:47:05.765174 32449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0725 12:47:05.888323 32449 ssh_runner.go:195] Run: sudo systemctl restart docker
I0725 12:47:26.735641 32449 ssh_runner.go:235] Completed: sudo systemctl restart docker: (20.847699661s)
I0725 12:47:26.735697 32449 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0725 12:47:26.858226 32449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0725 12:47:26.963261 32449 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0725 12:47:26.976260 32449 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0725 12:47:26.976340 32449 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0725 12:47:26.986352 32449 start.go:471] Will wait 60s for crictl version
I0725 12:47:26.986413 32449 ssh_runner.go:195] Run: sudo crictl version
I0725 12:47:27.022115 32449 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0725 12:47:27.022179 32449 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:27.064171 32449 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:27.168890 32449 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
I0725 12:47:27.168984 32449 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0725 12:47:27.171936 32449 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0725 12:47:27.171995 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:27.196306 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:27.196318 32449 docker.go:542] Images already preloaded, skipping extraction
I0725 12:47:27.196381 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:27.220719 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:27.220737 32449 cache_images.go:84] Images are preloaded, skipping loading
I0725 12:47:27.220897 32449 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0725 12:47:27.264481 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:27.264492 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:27.264506 32449 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0725 12:47:27.264519 32449 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.23 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220725124607-24757 NodeName:pause-20220725124607-24757 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.23 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0725 12:47:27.264608 32449 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.23
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220725124607-24757"
kubeletExtraArgs:
node-ip: 192.168.64.23
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.23"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0725 12:47:27.264672 32449 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220725124607-24757 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.23 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0725 12:47:27.264720 32449 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0725 12:47:27.270741 32449 binaries.go:44] Found k8s binaries, skipping transfer
I0725 12:47:27.270790 32449 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0725 12:47:27.276381 32449 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (489 bytes)
I0725 12:47:27.286602 32449 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0725 12:47:27.298589 32449 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
I0725 12:47:27.320362 32449 ssh_runner.go:195] Run: grep 192.168.64.23 control-plane.minikube.internal$ /etc/hosts
I0725 12:47:27.327321 32449 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757 for IP: 192.168.64.23
I0725 12:47:27.327422 32449 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
I0725 12:47:27.327476 32449 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
I0725 12:47:27.327554 32449 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key
I0725 12:47:27.327623 32449 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.key.7d9037ca
I0725 12:47:27.327670 32449 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.key
I0725 12:47:27.327873 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem (1338 bytes)
W0725 12:47:27.327912 32449 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757_empty.pem, impossibly tiny 0 bytes
I0725 12:47:27.327925 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
I0725 12:47:27.327955 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1078 bytes)
I0725 12:47:27.327988 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
I0725 12:47:27.328016 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1679 bytes)
I0725 12:47:27.328090 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:27.328573 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0725 12:47:27.360725 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0725 12:47:27.387942 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0725 12:47:27.427683 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0725 12:47:27.447934 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0725 12:47:27.464461 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0725 12:47:27.480792 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0725 12:47:27.496689 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0725 12:47:27.512885 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0725 12:47:27.528701 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem --> /usr/share/ca-certificates/24757.pem (1338 bytes)
I0725 12:47:27.547370 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /usr/share/ca-certificates/247572.pem (1708 bytes)
I0725 12:47:27.588968 32449 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0725 12:47:27.600771 32449 ssh_runner.go:195] Run: openssl version
I0725 12:47:27.604305 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0725 12:47:27.612006 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.616154 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:54 /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.616190 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.623374 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0725 12:47:27.637254 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24757.pem && ln -fs /usr/share/ca-certificates/24757.pem /etc/ssl/certs/24757.pem"
I0725 12:47:27.647841 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24757.pem
I0725 12:47:27.651408 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:57 /usr/share/ca-certificates/24757.pem
I0725 12:47:27.651458 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24757.pem
I0725 12:47:27.655546 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24757.pem /etc/ssl/certs/51391683.0"
I0725 12:47:27.662604 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247572.pem && ln -fs /usr/share/ca-certificates/247572.pem /etc/ssl/certs/247572.pem"
I0725 12:47:27.670270 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247572.pem
I0725 12:47:27.673388 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:57 /usr/share/ca-certificates/247572.pem
I0725 12:47:27.673431 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247572.pem
I0725 12:47:27.682884 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/247572.pem /etc/ssl/certs/3ec20f2e.0"
I0725 12:47:27.697969 32449 kubeadm.go:395] StartCluster: {Name:pause-20220725124607-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:27.698088 32449 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:27.751529 32449 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0725 12:47:27.761108 32449 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0725 12:47:27.761129 32449 kubeadm.go:626] restartCluster start
I0725 12:47:27.761186 32449 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0725 12:47:27.797629 32449 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0725 12:47:27.798037 32449 kubeconfig.go:92] found "pause-20220725124607-24757" server: "https://192.168.64.23:8443"
I0725 12:47:27.798425 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:27.799059 32449 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0725 12:47:27.805671 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:27.805715 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:27.822205 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.022382 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.022446 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.034958 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.222415 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.222472 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.237243 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.423027 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.423150 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.432259 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.622863 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.622923 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.631111 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.822595 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.822726 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.831432 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.022465 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.022564 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.032625 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.222418 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.222483 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.231030 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.422278 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.422342 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.431624 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.622300 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.622383 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.631597 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.823351 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.823415 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.832384 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.023364 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.023457 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.033811 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.222362 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.222493 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.232730 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.423149 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.423347 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.434769 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.623840 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.623975 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.634089 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.823171 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.823233 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.832208 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.832219 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.832277 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.841016 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.841029 32449 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
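The retries above are a plain poll-until-deadline loop: roughly every 200ms, minikube runs pgrep for the apiserver process until it appears or the wait times out, at which point the cluster is marked as needing reconfiguration. The following Go sketch shows that polling pattern in isolation; it is an editor's illustration under assumed interval/timeout values, not the minikube source.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep for the apiserver command line until it
// succeeds or the overall timeout elapses.
func waitForAPIServerProcess(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when no process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	// Interval and timeout here are assumptions chosen to mirror the log's cadence.
	if err := waitForAPIServerProcess(200*time.Millisecond, 3*time.Second); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}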
I0725 12:47:30.841038 32449 kubeadm.go:1092] stopping kube-system containers ...
I0725 12:47:30.841091 32449 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:30.863318 32449 docker.go:443] Stopping containers: [8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd]
I0725 12:47:30.863399 32449 ssh_runner.go:195] Run: docker stop 8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd
I0725 12:47:40.471014 32449 ssh_runner.go:235] Completed: docker stop 8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd: (9.607781038s)
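The two docker invocations above first list every kube-system container ID by name pattern and then stop them in a single call. A short Go sketch of that sequence follows (editor's illustration, not minikube's code; the filter and format strings are copied from the log).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or not) whose names match kube-system pods.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	// docker stop gives each container a grace period (10s by default) before SIGKILL.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}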
I0725 12:47:40.471069 32449 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0725 12:47:40.497793 32449 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0725 12:47:40.504564 32449 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jul 25 19:46 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jul 25 19:46 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2043 Jul 25 19:46 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Jul 25 19:46 /etc/kubernetes/scheduler.conf
I0725 12:47:40.504613 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0725 12:47:40.510944 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0725 12:47:40.517741 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0725 12:47:40.523731 32449 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:40.523765 32449 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0725 12:47:40.529929 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0725 12:47:40.535794 32449 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:40.535826 32449 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0725 12:47:40.542016 32449 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0725 12:47:40.548372 32449 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0725 12:47:40.548382 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:40.585742 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.045264 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.234558 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.280909 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
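Rather than a full `kubeadm init`, the restart path re-runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. The sketch below drives the same phase sequence with os/exec; it is an editor's illustration of the commands shown in the log, not the code minikube actually executes over its SSH runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order copied from the log; each phase reuses the same config file.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}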
I0725 12:47:41.330190 32449 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:47:41.330251 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:41.839951 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.339855 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.838994 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.849968 32449 api_server.go:71] duration metric: took 1.519809905s to wait for apiserver process to appear ...
I0725 12:47:42.849984 32449 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:47:42.849997 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:47.258116 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0725 12:47:47.258131 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0725 12:47:47.760362 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:47.766179 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0725 12:47:47.766196 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0725 12:47:48.258229 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:48.262225 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0725 12:47:48.262237 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0725 12:47:48.760315 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:48.765926 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 200:
ok
I0725 12:47:48.771384 32449 api_server.go:140] control plane version: v1.24.2
I0725 12:47:48.771425 32449 api_server.go:130] duration metric: took 5.921533487s to wait for apiserver health ...
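The health probes above progress from 403 (the anonymous user cannot read /healthz before the RBAC bootstrap roles exist), through 500 while the post-start hooks finish, to 200. One way to probe /healthz with the cluster's admin client certificate, which sidesteps the anonymous 403, is sketched below; the local file names client.crt, client.key and ca.crt are assumptions standing in for the profile paths in the log, and this is not minikube's own probe code.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Assumed local copies of the profile's client certificate and CA.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.64.23:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect "200 ok" once post-start hooks finish
}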
I0725 12:47:48.771438 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:48.771458 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:48.771481 32449 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:47:48.776990 32449 system_pods.go:59] 7 kube-system pods found
I0725 12:47:48.777004 32449 system_pods.go:61] "coredns-6d4b75cb6d-rglh7" [bfdceddb-f0ec-481c-a4a2-ce56bb133d27] Running
I0725 12:47:48.777010 32449 system_pods.go:61] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0725 12:47:48.777018 32449 system_pods.go:61] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0725 12:47:48.777023 32449 system_pods.go:61] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:47:48.777029 32449 system_pods.go:61] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0725 12:47:48.777034 32449 system_pods.go:61] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:47:48.777038 32449 system_pods.go:61] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:47:48.777042 32449 system_pods.go:74] duration metric: took 5.556495ms to wait for pod list to return data ...
I0725 12:47:48.777048 32449 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:47:48.779296 32449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0725 12:47:48.779312 32449 node_conditions.go:123] node cpu capacity is 2
I0725 12:47:48.779321 32449 node_conditions.go:105] duration metric: took 2.26989ms to run NodePressure ...
I0725 12:47:48.779331 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:48.896760 32449 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0725 12:47:48.899954 32449 kubeadm.go:777] kubelet initialised
I0725 12:47:48.899964 32449 kubeadm.go:778] duration metric: took 3.186627ms waiting for restarted kubelet to initialise ...
I0725 12:47:48.899971 32449 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:48.903437 32449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace to be "Ready" ...
I0725 12:47:48.907836 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:48.907844 32449 pod_ready.go:81] duration metric: took 4.397671ms waiting for pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace to be "Ready" ...
I0725 12:47:48.907849 32449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:47:50.917931 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:53.417934 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:55.419093 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:57.915085 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:58.916472 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:58.916486 32449 pod_ready.go:81] duration metric: took 10.008815507s waiting for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:47:58.916492 32449 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.431517 32449 pod_ready.go:92] pod "etcd-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.431549 32449 pod_ready.go:81] duration metric: took 515.03489ms waiting for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.431556 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.435193 32449 pod_ready.go:92] pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.435201 32449 pod_ready.go:81] duration metric: took 3.640991ms waiting for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.435208 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.438379 32449 pod_ready.go:92] pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.438387 32449 pod_ready.go:81] duration metric: took 3.174279ms waiting for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.438394 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.442279 32449 pod_ready.go:92] pod "kube-proxy-vvgjh" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.442289 32449 pod_ready.go:81] duration metric: took 3.889821ms waiting for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.442295 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.714855 32449 pod_ready.go:92] pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.714865 32449 pod_ready.go:81] duration metric: took 272.570349ms waiting for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.714870 32449 pod_ready.go:38] duration metric: took 10.815102423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
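Each pod_ready wait above repeatedly fetches the pod and checks whether its Ready condition is True. A compact client-go sketch of that check is given below as an editor's illustration; the kubeconfig path and pod name are placeholders for the values used by the test harness, and this is not the harness's own implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; the integration run points at its own path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-wnp4h", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
}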
I0725 12:47:59.714885 32449 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0725 12:47:59.722486 32449 ops.go:34] apiserver oom_adj: -16
I0725 12:47:59.722496 32449 kubeadm.go:630] restartCluster took 31.961985619s
I0725 12:47:59.722501 32449 kubeadm.go:397] StartCluster complete in 32.02516291s
I0725 12:47:59.722514 32449 settings.go:142] acquiring lock: {Name:mkd3ca246a72d4c75785a7cc650cfc3c06de2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:59.722609 32449 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:59.723211 32449 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:59.724153 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:59.726081 32449 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220725124607-24757" rescaled to 1
I0725 12:47:59.726118 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0725 12:47:59.726114 32449 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0725 12:47:59.726141 32449 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0725 12:47:59.768656 32449 out.go:177] * Verifying Kubernetes components...
I0725 12:47:59.726174 32449 addons.go:65] Setting storage-provisioner=true in profile "pause-20220725124607-24757"
I0725 12:47:59.726175 32449 addons.go:65] Setting default-storageclass=true in profile "pause-20220725124607-24757"
I0725 12:47:59.726311 32449 config.go:178] Loaded profile config "pause-20220725124607-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:47:59.787218 32449 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0725 12:47:59.789703 32449 addons.go:153] Setting addon storage-provisioner=true in "pause-20220725124607-24757"
I0725 12:47:59.789706 32449 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220725124607-24757"
W0725 12:47:59.789715 32449 addons.go:162] addon storage-provisioner should already be in state true
I0725 12:47:59.789742 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:47:59.789751 32449 host.go:66] Checking if "pause-20220725124607-24757" exists ...
I0725 12:47:59.790020 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.790041 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.790044 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.790058 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.797714 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51473
I0725 12:47:59.798103 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51475
I0725 12:47:59.798272 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.798435 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.798707 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.798727 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.798816 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.798829 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.798980 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.799060 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.799231 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.799341 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.799436 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.799441 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.799466 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.802301 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:59.807605 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51477
I0725 12:47:59.808268 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.808662 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.808673 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.808961 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.809100 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.809218 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.809345 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.810223 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:59.810493 32449 addons.go:153] Setting addon default-storageclass=true in "pause-20220725124607-24757"
I0725 12:47:59.831555 32449 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:59.816767 32449 node_ready.go:35] waiting up to 6m0s for node "pause-20220725124607-24757" to be "Ready" ...
W0725 12:47:59.831555 32449 addons.go:162] addon default-storageclass should already be in state true
I0725 12:47:59.852810 32449 host.go:66] Checking if "pause-20220725124607-24757" exists ...
I0725 12:47:59.852826 32449 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:47:59.852835 32449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0725 12:47:59.852853 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:59.853049 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:59.853161 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:59.853180 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.853219 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.853264 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:59.853659 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:59.861671 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51480
I0725 12:47:59.862254 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.862795 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.862844 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.863107 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.863739 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.863796 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.871393 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51482
I0725 12:47:59.871804 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.872263 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.872295 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.872592 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.872763 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.872884 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.872977 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.874096 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:59.874327 32449 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0725 12:47:59.874337 32449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0725 12:47:59.874346 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:59.874451 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:59.874572 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:59.874685 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:59.874778 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:59.915855 32449 node_ready.go:49] node "pause-20220725124607-24757" has status "Ready":"True"
I0725 12:47:59.915866 32449 node_ready.go:38] duration metric: took 63.170605ms waiting for node "pause-20220725124607-24757" to be "Ready" ...
I0725 12:47:59.915875 32449 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:59.938916 32449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0725 12:47:59.970519 32449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
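Both addon manifests were first copied into /etc/kubernetes/addons on the guest and are now applied with the bundled kubectl against the in-VM kubeconfig. The sketch below only assembles those two apply commands (it prints rather than executes them, since the real runs happen over SSH inside the VM); it is an editor's illustration, with paths taken from the log, not minikube's ssh_runner code.

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon builds the sudo kubectl apply command the log shows for one manifest.
func applyAddon(manifest string) *exec.Cmd {
	return exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.2/kubectl",
		"apply", "-f", manifest)
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		fmt.Println("would run:", applyAddon(m).String())
	}
}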
I0725 12:48:00.117232 32449 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.513514 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:00.513523 32449 pod_ready.go:81] duration metric: took 396.286746ms waiting for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.513529 32449 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.539195 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539210 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539198 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539241 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539416 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539417 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539425 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539420 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539436 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539437 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539457 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539460 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539463 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539466 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539639 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539643 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539654 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539655 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539656 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539667 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539671 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539682 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539820 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539830 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.563090 32449 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0725 12:48:00.621161 32449 addons.go:414] enableAddons completed in 895.045234ms
I0725 12:48:00.914536 32449 pod_ready.go:92] pod "etcd-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:00.914568 32449 pod_ready.go:81] duration metric: took 401.042289ms waiting for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.914575 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.315399 32449 pod_ready.go:92] pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:01.315410 32449 pod_ready.go:81] duration metric: took 400.837301ms waiting for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.315417 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.713652 32449 pod_ready.go:92] pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:01.713662 32449 pod_ready.go:81] duration metric: took 398.24262ms waiting for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.713669 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.116833 32449 pod_ready.go:92] pod "kube-proxy-vvgjh" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:02.116846 32449 pod_ready.go:81] duration metric: took 403.180188ms waiting for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.116857 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.514872 32449 pod_ready.go:92] pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:02.514885 32449 pod_ready.go:81] duration metric: took 398.015294ms waiting for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.514892 32449 pod_ready.go:38] duration metric: took 2.599056789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:48:02.514914 32449 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:48:02.514971 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:48:02.524797 32449 api_server.go:71] duration metric: took 2.798697005s to wait for apiserver process to appear ...
I0725 12:48:02.524812 32449 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:48:02.524819 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:48:02.528761 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 200:
ok
I0725 12:48:02.529297 32449 api_server.go:140] control plane version: v1.24.2
I0725 12:48:02.529305 32449 api_server.go:130] duration metric: took 4.48935ms to wait for apiserver health ...
I0725 12:48:02.529310 32449 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:48:02.717715 32449 system_pods.go:59] 7 kube-system pods found
I0725 12:48:02.717729 32449 system_pods.go:61] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running
I0725 12:48:02.717733 32449 system_pods.go:61] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running
I0725 12:48:02.717739 32449 system_pods.go:61] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:48:02.717743 32449 system_pods.go:61] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running
I0725 12:48:02.717746 32449 system_pods.go:61] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:48:02.717750 32449 system_pods.go:61] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:48:02.717753 32449 system_pods.go:61] "storage-provisioner" [7d189436-f57b-4db0-a2c3-534d702f468f] Running
I0725 12:48:02.717757 32449 system_pods.go:74] duration metric: took 188.447508ms to wait for pod list to return data ...
I0725 12:48:02.717768 32449 default_sa.go:34] waiting for default service account to be created ...
I0725 12:48:02.914666 32449 default_sa.go:45] found service account: "default"
I0725 12:48:02.914676 32449 default_sa.go:55] duration metric: took 196.907597ms for default service account to be created ...
I0725 12:48:02.914681 32449 system_pods.go:116] waiting for k8s-apps to be running ...
I0725 12:48:03.116281 32449 system_pods.go:86] 7 kube-system pods found
I0725 12:48:03.116295 32449 system_pods.go:89] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running
I0725 12:48:03.116300 32449 system_pods.go:89] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running
I0725 12:48:03.116304 32449 system_pods.go:89] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:48:03.116307 32449 system_pods.go:89] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running
I0725 12:48:03.116311 32449 system_pods.go:89] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:48:03.116314 32449 system_pods.go:89] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:48:03.116319 32449 system_pods.go:89] "storage-provisioner" [7d189436-f57b-4db0-a2c3-534d702f468f] Running
I0725 12:48:03.116334 32449 system_pods.go:126] duration metric: took 201.650654ms to wait for k8s-apps to be running ...
I0725 12:48:03.116348 32449 system_svc.go:44] waiting for kubelet service to be running ....
I0725 12:48:03.116413 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:48:03.126610 32449 system_svc.go:56] duration metric: took 10.263673ms WaitForService to wait for kubelet.
I0725 12:48:03.126626 32449 kubeadm.go:572] duration metric: took 3.400540205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0725 12:48:03.126644 32449 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:03.314389 32449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0725 12:48:03.314403 32449 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:03.314410 32449 node_conditions.go:105] duration metric: took 187.766416ms to run NodePressure ...
I0725 12:48:03.314435 32449 start.go:216] waiting for startup goroutines ...
I0725 12:48:03.348116 32449 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
I0725 12:48:03.423611 32449 out.go:177] * Done! kubectl is now configured to use "pause-20220725124607-24757" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220725124607-24757 -n pause-20220725124607-24757
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-20220725124607-24757 logs -n 25
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220725124607-24757 logs -n 25: (3.13281775s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |------------|-----------------------------------------|-----------------------------------------|----------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|-----------------------------------------|-----------------------------------------|----------|---------|---------------------|---------------------|
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | 25 Jul 22 12:39 PDT |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --schedule 15s | | | | | |
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --schedule 15s | | | | | |
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | 25 Jul 22 12:40 PDT |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --schedule 15s | | | | | |
| delete | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:40 PDT | 25 Jul 22 12:40 PDT |
| | scheduled-stop-20220725123836-24757 | | | | | |
| start | -p | skaffold-20220725124025-24757 | jenkins | v1.26.0 | 25 Jul 22 12:40 PDT | 25 Jul 22 12:41 PDT |
| | skaffold-20220725124025-24757 | | | | | |
| | --memory=2600 | | | | | |
| | --driver=hyperkit | | | | | |
| docker-env | --shell none -p | skaffold-20220725124025-24757 | skaffold | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:41 PDT |
| | skaffold-20220725124025-24757 | | | | | |
| | --user=skaffold | | | | | |
| delete | -p | skaffold-20220725124025-24757 | jenkins | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:41 PDT |
| | skaffold-20220725124025-24757 | | | | | |
| start | -p | offline-docker-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:43 PDT |
| | offline-docker-20220725124139-24757 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --memory=2048 --wait=true | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p auto-20220725124139-24757 | auto-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:42 PDT |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p auto-20220725124139-24757 | auto-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:42 PDT | 25 Jul 22 12:42 PDT |
| | pgrep -a kubelet | | | | | |
| delete | -p auto-20220725124139-24757 | auto-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:42 PDT | 25 Jul 22 12:42 PDT |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:42 PDT | 25 Jul 22 12:44 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | offline-docker-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:43 PDT | 25 Jul 22 12:43 PDT |
| | offline-docker-20220725124139-24757 | | | | | |
| stop | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | 25 Jul 22 12:44 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | 25 Jul 22 12:44 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | 25 Jul 22 12:45 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | stopped-upgrade-20220725124328-24757 | jenkins | v1.26.0 | 25 Jul 22 12:45 PDT | 25 Jul 22 12:46 PDT |
| | stopped-upgrade-20220725124328-24757 | | | | | |
| | --memory=2200 --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:45 PDT | 25 Jul 22 12:45 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| delete | -p | stopped-upgrade-20220725124328-24757 | jenkins | v1.26.0 | 25 Jul 22 12:46 PDT | 25 Jul 22 12:46 PDT |
| | stopped-upgrade-20220725124328-24757 | | | | | |
| start | -p pause-20220725124607-24757 | pause-20220725124607-24757 | jenkins | v1.26.0 | 25 Jul 22 12:46 PDT | 25 Jul 22 12:47 PDT |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p pause-20220725124607-24757 | pause-20220725124607-24757 | jenkins | v1.26.0 | 25 Jul 22 12:47 PDT | 25 Jul 22 12:48 PDT |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | running-upgrade-20220725124546-24757 | jenkins | v1.26.0 | 25 Jul 22 12:47 PDT | 25 Jul 22 12:48 PDT |
| | running-upgrade-20220725124546-24757 | | | | | |
| | --memory=2200 --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | running-upgrade-20220725124546-24757 | jenkins | v1.26.0 | 25 Jul 22 12:48 PDT | |
| | running-upgrade-20220725124546-24757 | | | | | |
|------------|-----------------------------------------|-----------------------------------------|----------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/25 12:47:16
Running on machine: MacOS-Agent-1
Binary: Built with gc go1.18.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0725 12:47:16.479890 32469 out.go:296] Setting OutFile to fd 1 ...
I0725 12:47:16.480510 32469 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:47:16.480517 32469 out.go:309] Setting ErrFile to fd 2...
I0725 12:47:16.480525 32469 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:47:16.480773 32469 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
I0725 12:47:16.481797 32469 out.go:303] Setting JSON to false
I0725 12:47:16.498142 32469 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10009,"bootTime":1658768427,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0725 12:47:16.498280 32469 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0725 12:47:16.537347 32469 out.go:177] * [running-upgrade-20220725124546-24757] minikube v1.26.0 on Darwin 12.4
I0725 12:47:16.573253 32469 notify.go:193] Checking for updates...
I0725 12:47:16.610977 32469 out.go:177] - MINIKUBE_LOCATION=14555
I0725 12:47:16.687221 32469 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:16.763200 32469 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0725 12:47:16.822085 32469 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0725 12:47:16.865985 32469 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
I0725 12:47:16.887763 32469 config.go:178] Loaded profile config "running-upgrade-20220725124546-24757": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0725 12:47:16.887795 32469 start_flags.go:627] config upgrade: Driver=hyperkit
I0725 12:47:16.887807 32469 start_flags.go:639] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
I0725 12:47:16.887931 32469 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/config.json ...
I0725 12:47:16.889381 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:16.889436 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:16.896256 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51439
I0725 12:47:16.896612 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:16.896994 32469 main.go:134] libmachine: Using API Version 1
I0725 12:47:16.897005 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:16.897207 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:16.897330 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:16.919047 32469 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
I0725 12:47:16.940078 32469 driver.go:365] Setting default libvirt URI to qemu:///system
I0725 12:47:16.940624 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:16.940682 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:16.948084 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51441
I0725 12:47:16.948480 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:16.948812 32469 main.go:134] libmachine: Using API Version 1
I0725 12:47:16.948823 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:16.949041 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:16.949126 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:16.975911 32469 out.go:177] * Using the hyperkit driver based on existing profile
I0725 12:47:16.997031 32469 start.go:284] selected driver: hyperkit
I0725 12:47:16.997054 32469 start.go:808] validating driver "hyperkit" against &{Name:running-upgrade-20220725124546-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:16.997197 32469 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0725 12:47:16.999295 32469 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:16.999403 32469 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0725 12:47:17.005471 32469 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.26.0
I0725 12:47:17.008449 32469 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:17.008472 32469 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0725 12:47:17.008544 32469 cni.go:95] Creating CNI manager for ""
I0725 12:47:17.008554 32469 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:17.008567 32469 start_flags.go:310] config:
{Name:running-upgrade-20220725124546-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:17.008691 32469 iso.go:128] acquiring lock: {Name:mk75e62a3ceeaef3aefa2a3a9c617c6e59d820a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.030213 32469 out.go:177] * Starting control plane node running-upgrade-20220725124546-24757 in cluster running-upgrade-20220725124546-24757
I0725 12:47:17.052085 32469 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
W0725 12:47:17.129419 32469 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0725 12:47:17.129570 32469 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/config.json ...
I0725 12:47:17.129705 32469 cache.go:107] acquiring lock: {Name:mkc10c9c66e179cd4a0dc6e8fa7072246b41ed8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129709 32469 cache.go:107] acquiring lock: {Name:mk17fee4f7d14c3244831bbcf83d4048b5bf85ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129750 32469 cache.go:107] acquiring lock: {Name:mk3a8071de70e33fc08172e48377685e9806cd28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129908 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
I0725 12:47:17.129808 32469 cache.go:107] acquiring lock: {Name:mk48edca73ba098a628de4d6b84f553475ca8419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129944 32469 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 260.61µs
I0725 12:47:17.129954 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
I0725 12:47:17.129972 32469 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
I0725 12:47:17.129955 32469 cache.go:107] acquiring lock: {Name:mk9d3d9189d65cdbe444cdf74de19f91817d64ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129972 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0725 12:47:17.129993 32469 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 284.206µs
I0725 12:47:17.130026 32469 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 337.2µs
I0725 12:47:17.130058 32469 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
I0725 12:47:17.130045 32469 cache.go:107] acquiring lock: {Name:mk56000c8091f0f3f746944023388a5d091f1f39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.130069 32469 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0725 12:47:17.130001 32469 cache.go:107] acquiring lock: {Name:mk0254551fd10ae756e3fd2ab6128ea499634bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.130143 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
I0725 12:47:17.129993 32469 cache.go:107] acquiring lock: {Name:mka4d2d18f2170bd8ec63c8694b1dcb2ae884cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.130161 32469 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 217.443µs
I0725 12:47:17.130194 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
I0725 12:47:17.130122 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
I0725 12:47:17.130217 32469 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 274.811µs
I0725 12:47:17.130212 32469 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
I0725 12:47:17.130234 32469 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
I0725 12:47:17.130249 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
I0725 12:47:17.130248 32469 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 468.611µs
I0725 12:47:17.130257 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
I0725 12:47:17.130279 32469 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
I0725 12:47:17.130282 32469 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 304.845µs
I0725 12:47:17.130284 32469 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 397.407µs
I0725 12:47:17.130294 32469 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
I0725 12:47:17.130303 32469 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0725 12:47:17.130317 32469 cache.go:87] Successfully saved all images to host disk.
I0725 12:47:17.130441 32469 cache.go:208] Successfully downloaded all kic artifacts
I0725 12:47:17.130487 32469 start.go:370] acquiring machines lock for running-upgrade-20220725124546-24757: {Name:mk6dd10c27893192a420c40bba76224953275f58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0725 12:47:17.130563 32469 start.go:374] acquired machines lock for "running-upgrade-20220725124546-24757" in 59.066µs
I0725 12:47:17.130591 32469 start.go:95] Skipping create...Using existing machine configuration
I0725 12:47:17.130608 32469 fix.go:55] fixHost starting: minikube
I0725 12:47:17.131033 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:17.131062 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:17.137985 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51443
I0725 12:47:17.138349 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:17.138653 32469 main.go:134] libmachine: Using API Version 1
I0725 12:47:17.138663 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:17.138899 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:17.139014 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:17.139093 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:47:17.139180 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:17.139253 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:47:17.139989 32469 fix.go:103] recreateIfNeeded on running-upgrade-20220725124546-24757: state=Running err=<nil>
W0725 12:47:17.140003 32469 fix.go:129] unexpected machine state, will restart: <nil>
I0725 12:47:17.183090 32469 out.go:177] * Updating the running hyperkit "running-upgrade-20220725124546-24757" VM ...
I0725 12:47:17.220993 32469 machine.go:88] provisioning docker machine ...
I0725 12:47:17.221027 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:17.221327 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetMachineName
I0725 12:47:17.221510 32469 buildroot.go:166] provisioning hostname "running-upgrade-20220725124546-24757"
I0725 12:47:17.221536 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetMachineName
I0725 12:47:17.221710 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.221900 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.222112 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.222273 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.222398 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.222577 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.222794 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.222807 32469 main.go:134] libmachine: About to run SSH command:
sudo hostname running-upgrade-20220725124546-24757 && echo "running-upgrade-20220725124546-24757" | sudo tee /etc/hostname
I0725 12:47:17.294480 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220725124546-24757
I0725 12:47:17.294498 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.294635 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.294737 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.294834 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.294962 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.295088 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.295210 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.295222 32469 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-20220725124546-24757' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220725124546-24757/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-20220725124546-24757' | sudo tee -a /etc/hosts;
fi
fi
I0725 12:47:17.360741 32469 main.go:134] libmachine: SSH cmd err, output: <nil>:
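[Editor's sketch] The SSH command above is an idempotent /etc/hosts update: rewrite an existing 127.0.1.1 entry if one is present, otherwise append a new one. The same pattern in Go, as a minimal illustration only (ensureHostsEntry is a hypothetical helper, not minikube's code; path and hostname are parameters):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell logic above: if a 127.0.1.1 line
// already exists it is rewritten to the new hostname, otherwise a new
// entry is appended. Hypothetical sketch, not minikube's implementation.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	entry := "127.0.1.1 " + hostname
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(entry))
	} else {
		out = data
		if len(out) > 0 && out[len(out)-1] != '\n' {
			out = append(out, '\n')
		}
		out = append(out, []byte(entry+"\n")...)
	}
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "running-upgrade-20220725124546-24757"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}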
I0725 12:47:17.360770 32469 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
I0725 12:47:17.360784 32469 buildroot.go:174] setting up certificates
I0725 12:47:17.360793 32469 provision.go:83] configureAuth start
I0725 12:47:17.360800 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetMachineName
I0725 12:47:17.360920 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetIP
I0725 12:47:17.361011 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.361090 32469 provision.go:138] copyHostCerts
I0725 12:47:17.361156 32469 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
I0725 12:47:17.361164 32469 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
I0725 12:47:17.361279 32469 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1078 bytes)
I0725 12:47:17.361459 32469 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
I0725 12:47:17.361465 32469 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
I0725 12:47:17.361527 32469 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
I0725 12:47:17.361649 32469 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
I0725 12:47:17.361655 32469 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
I0725 12:47:17.361716 32469 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1679 bytes)
I0725 12:47:17.361827 32469 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20220725124546-24757 san=[192.168.64.22 192.168.64.22 localhost 127.0.0.1 minikube running-upgrade-20220725124546-24757]
I0725 12:47:17.441821 32469 provision.go:172] copyRemoteCerts
I0725 12:47:17.441875 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0725 12:47:17.441892 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.442065 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.442221 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.442400 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.442647 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:17.479745 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0725 12:47:17.488907 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0725 12:47:17.497728 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0725 12:47:17.506911 32469 provision.go:86] duration metric: configureAuth took 146.110362ms
I0725 12:47:17.506928 32469 buildroot.go:189] setting minikube options for container-runtime
I0725 12:47:17.507036 32469 config.go:178] Loaded profile config "running-upgrade-20220725124546-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0725 12:47:17.507048 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:17.507168 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.507259 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.507337 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.507412 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.507494 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.507590 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.507685 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.507693 32469 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0725 12:47:17.572899 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0725 12:47:17.572915 32469 buildroot.go:70] root file system type: tmpfs
I0725 12:47:17.573048 32469 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0725 12:47:17.573066 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.573194 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.573294 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.573383 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.573484 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.573610 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.573718 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.573767 32469 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0725 12:47:17.644436 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0725 12:47:17.644460 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.644585 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.644687 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.644778 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.644883 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.645028 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.645136 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.645150 32469 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
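[Editor's sketch] The two SSH commands above perform a write-compare-swap update of /lib/systemd/system/docker.service: the rendered unit is written to docker.service.new, and only when it differs from the installed unit is it moved into place and the daemon reloaded and restarted. A minimal Go illustration of that pattern under simplified assumptions (local files instead of SSH; not minikube's actual code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// swapIfChanged writes newContent to path+".new", compares it with the
// currently installed file, and only renames it into place (signalling
// that a reload/restart is needed) when the content actually changed.
func swapIfChanged(path string, newContent []byte) (changed bool, err error) {
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0644); err != nil {
		return false, err
	}
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		// Unit is already up to date; discard the staged copy.
		return false, os.Remove(tmp)
	}
	return true, os.Rename(tmp, path)
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := swapIfChanged("docker.service", unit)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		fmt.Println("unit updated; a daemon-reload and service restart would follow")
	}
}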
I0725 12:47:26.735641 32449 ssh_runner.go:235] Completed: sudo systemctl restart docker: (20.847699661s)
I0725 12:47:26.735697 32449 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0725 12:47:26.858226 32449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0725 12:47:26.963261 32449 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0725 12:47:26.976260 32449 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0725 12:47:26.976340 32449 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0725 12:47:26.986352 32449 start.go:471] Will wait 60s for crictl version
I0725 12:47:26.986413 32449 ssh_runner.go:195] Run: sudo crictl version
I0725 12:47:27.022115 32449 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0725 12:47:27.022179 32449 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:27.064171 32449 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:27.168890 32449 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
I0725 12:47:27.168984 32449 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0725 12:47:27.171936 32449 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0725 12:47:27.171995 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:27.196306 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:27.196318 32449 docker.go:542] Images already preloaded, skipping extraction
I0725 12:47:27.196381 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:27.220719 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:27.220737 32449 cache_images.go:84] Images are preloaded, skipping loading
I0725 12:47:27.220897 32449 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0725 12:47:27.264481 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:27.264492 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:27.264506 32449 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0725 12:47:27.264519 32449 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.23 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220725124607-24757 NodeName:pause-20220725124607-24757 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.23 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0725 12:47:27.264608 32449 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.23
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220725124607-24757"
kubeletExtraArgs:
node-ip: 192.168.64.23
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.23"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
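[Editor's sketch] The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go snippet that splits such a stream and reports each document's apiVersion and kind, assuming the sigs.k8s.io/yaml module is available (illustration only, not how minikube consumes the file):

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	// Read a kubeadm.yaml like the one rendered above.
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The stream contains several documents separated by "---".
	for _, doc := range strings.Split(string(data), "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var obj map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			fmt.Fprintln(os.Stderr, "unmarshal:", err)
			continue
		}
		fmt.Printf("%v / %v\n", obj["apiVersion"], obj["kind"])
	}
}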
I0725 12:47:27.264672 32449 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220725124607-24757 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.23 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0725 12:47:27.264720 32449 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0725 12:47:27.270741 32449 binaries.go:44] Found k8s binaries, skipping transfer
I0725 12:47:27.270790 32449 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0725 12:47:27.276381 32449 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (489 bytes)
I0725 12:47:27.286602 32449 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0725 12:47:27.298589 32449 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
I0725 12:47:27.320362 32449 ssh_runner.go:195] Run: grep 192.168.64.23 control-plane.minikube.internal$ /etc/hosts
I0725 12:47:27.327321 32449 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757 for IP: 192.168.64.23
I0725 12:47:27.327422 32449 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
I0725 12:47:27.327476 32449 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
I0725 12:47:27.327554 32449 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key
I0725 12:47:27.327623 32449 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.key.7d9037ca
I0725 12:47:27.327670 32449 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.key
I0725 12:47:27.327873 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem (1338 bytes)
W0725 12:47:27.327912 32449 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757_empty.pem, impossibly tiny 0 bytes
I0725 12:47:27.327925 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
I0725 12:47:27.327955 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1078 bytes)
I0725 12:47:27.327988 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
I0725 12:47:27.328016 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1679 bytes)
I0725 12:47:27.328090 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:27.328573 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0725 12:47:27.360725 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0725 12:47:27.387942 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0725 12:47:27.427683 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0725 12:47:27.447934 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0725 12:47:27.464461 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0725 12:47:27.480792 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0725 12:47:27.496689 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0725 12:47:27.512885 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0725 12:47:27.528701 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem --> /usr/share/ca-certificates/24757.pem (1338 bytes)
I0725 12:47:27.547370 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /usr/share/ca-certificates/247572.pem (1708 bytes)
I0725 12:47:27.588968 32449 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0725 12:47:27.600771 32449 ssh_runner.go:195] Run: openssl version
I0725 12:47:27.604305 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0725 12:47:27.612006 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.616154 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:54 /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.616190 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.623374 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0725 12:47:27.637254 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24757.pem && ln -fs /usr/share/ca-certificates/24757.pem /etc/ssl/certs/24757.pem"
I0725 12:47:27.647841 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24757.pem
I0725 12:47:27.651408 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:57 /usr/share/ca-certificates/24757.pem
I0725 12:47:27.651458 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24757.pem
I0725 12:47:27.655546 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24757.pem /etc/ssl/certs/51391683.0"
I0725 12:47:27.662604 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247572.pem && ln -fs /usr/share/ca-certificates/247572.pem /etc/ssl/certs/247572.pem"
I0725 12:47:27.670270 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247572.pem
I0725 12:47:27.673388 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:57 /usr/share/ca-certificates/247572.pem
I0725 12:47:27.673431 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247572.pem
I0725 12:47:27.682884 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/247572.pem /etc/ssl/certs/3ec20f2e.0"
I0725 12:47:27.697969 32449 kubeadm.go:395] StartCluster: {Name:pause-20220725124607-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:27.698088 32449 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:27.751529 32449 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0725 12:47:27.761108 32449 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0725 12:47:27.761129 32449 kubeadm.go:626] restartCluster start
I0725 12:47:27.761186 32449 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0725 12:47:27.797629 32449 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0725 12:47:27.798037 32449 kubeconfig.go:92] found "pause-20220725124607-24757" server: "https://192.168.64.23:8443"
I0725 12:47:27.798425 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-247
57/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:27.799059 32449 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0725 12:47:27.805671 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:27.805715 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:27.822205 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.022382 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.022446 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.034958 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.222415 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.222472 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.237243 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.423027 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.423150 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.432259 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.622863 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.622923 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.631111 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.822595 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.822726 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.831432 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.277408 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
+++ /lib/systemd/system/docker.service.new
@@ -3,9 +3,12 @@
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
+Restart=on-failure
@@ -21,7 +24,7 @@
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
I0725 12:47:29.277420 32469 machine.go:91] provisioned docker machine in 12.056643231s
I0725 12:47:29.277434 32469 start.go:307] post-start starting for "running-upgrade-20220725124546-24757" (driver="hyperkit")
I0725 12:47:29.277440 32469 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0725 12:47:29.277451 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.277626 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0725 12:47:29.277638 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.277741 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.277814 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.277929 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.278009 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:29.314489 32469 ssh_runner.go:195] Run: cat /etc/os-release
I0725 12:47:29.317079 32469 info.go:137] Remote host: Buildroot 2019.02.7
I0725 12:47:29.317093 32469 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
I0725 12:47:29.317198 32469 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
I0725 12:47:29.317334 32469 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem -> 247572.pem in /etc/ssl/certs
I0725 12:47:29.317487 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0725 12:47:29.321269 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:29.330125 32469 start.go:310] post-start completed in 52.683944ms
I0725 12:47:29.330138 32469 fix.go:57] fixHost completed within 12.199776264s
I0725 12:47:29.330151 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.330290 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.330404 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.330506 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.330607 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.330724 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:29.330829 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:29.330836 32469 main.go:134] libmachine: About to run SSH command:
date +%s.%N
I0725 12:47:29.398680 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: 1658778449.704738852
I0725 12:47:29.398690 32469 fix.go:207] guest clock: 1658778449.704738852
I0725 12:47:29.398695 32469 fix.go:220] Guest: 2022-07-25 12:47:29.704738852 -0700 PDT Remote: 2022-07-25 12:47:29.33014 -0700 PDT m=+12.897586139 (delta=374.598852ms)
I0725 12:47:29.398714 32469 fix.go:191] guest clock delta is within tolerance: 374.598852ms
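The fix.go lines above read the guest's `date +%s.%N` output, compare it with the host clock, and accept the ~375ms delta as within tolerance. A rough sketch of that comparison follows; the 2-second tolerance is an illustrative assumption, and the parser relies on `%N` printing a zero-padded 9-digit nanosecond fraction.

// Sketch only: parse "seconds.nanoseconds" from the guest and check the
// clock delta against a tolerance, as the fix.go lines above describe.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts output like "1658778449.704738852" into a time.Time.
// It assumes the fractional part is exactly nine digits (what `date +%N` emits).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1658778449.704738852") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}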
I0725 12:47:29.398718 32469 start.go:82] releasing machines lock for "running-upgrade-20220725124546-24757", held for 12.268384436s
I0725 12:47:29.398736 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.398865 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetIP
I0725 12:47:29.398966 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399075 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399189 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399504 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399599 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399661 32469 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0725 12:47:29.399688 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.399747 32469 ssh_runner.go:195] Run: systemctl --version
I0725 12:47:29.399760 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.399768 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.399847 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.399876 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.399953 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.399954 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.400030 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:29.400052 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.400127 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:29.433425 32469 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
I0725 12:47:29.433488 32469 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0725 12:47:29.596262 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0725 12:47:29.604094 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0725 12:47:29.610719 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0725 12:47:29.618917 32469 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0725 12:47:29.681154 32469 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0725 12:47:29.743519 32469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0725 12:47:29.811526 32469 ssh_runner.go:195] Run: sudo systemctl restart docker
I0725 12:47:31.054001 32469 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.242480525s)
I0725 12:47:31.054059 32469 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:31.084519 32469 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:31.154286 32469 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
I0725 12:47:31.154426 32469 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0725 12:47:31.158337 32469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0725 12:47:31.164342 32469 localpath.go:92] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/client.crt -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt
I0725 12:47:31.164602 32469 localpath.go:117] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/client.key -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.key
I0725 12:47:31.164878 32469 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
I0725 12:47:31.164922 32469 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:31.187082 32469 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
kubernetesui/dashboard:v2.0.0-beta8
k8s.gcr.io/coredns:1.6.5
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
k8s.gcr.io/kube-addon-manager:v9.0.2
k8s.gcr.io/pause:3.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0725 12:47:31.187094 32469 docker.go:617] gcr.io/k8s-minikube/storage-provisioner:v5 wasn't preloaded
I0725 12:47:31.187102 32469 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
I0725 12:47:31.193719 32469 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.194122 32469 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
I0725 12:47:31.194532 32469 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
I0725 12:47:31.194866 32469 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
I0725 12:47:31.195101 32469 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
I0725 12:47:31.195598 32469 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
I0725 12:47:31.195961 32469 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
I0725 12:47:31.196248 32469 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
I0725 12:47:31.200862 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
I0725 12:47:31.202444 32469 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
I0725 12:47:31.202458 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
I0725 12:47:31.202580 32469 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.203086 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
I0725 12:47:31.203751 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
I0725 12:47:31.203755 32469 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
I0725 12:47:31.203879 32469 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
I0725 12:47:29.022465 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.022564 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.032625 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.222418 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.222483 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.231030 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.422278 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.422342 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.431624 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.622300 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.622383 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.631597 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.823351 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.823415 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.832384 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.023364 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.023457 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.033811 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.222362 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.222493 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.232730 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.423149 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.423347 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.434769 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.623840 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.623975 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.634089 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.823171 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.823233 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.832208 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.832219 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.832277 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.841016 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.841029 32449 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
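The repeated api_server.go lines above form a poll loop: the pgrep check is retried roughly every 200ms until it reports a kube-apiserver PID or the overall wait times out, at which point the cluster is marked as needing reconfiguration. A minimal sketch of such a loop is below; the local exec call, interval, and timeout are assumptions (minikube runs the command over SSH with its own timings).

// Sketch only: poll for a running kube-apiserver until a deadline, mirroring
// the "Checking apiserver status ..." / "stopped: unable to get apiserver pid"
// sequence above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 only when a process matched
		}
		time.Sleep(200 * time.Millisecond) // retry cadence seen in the timestamps above
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	if err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}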
I0725 12:47:30.841038 32449 kubeadm.go:1092] stopping kube-system containers ...
I0725 12:47:30.841091 32449 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:30.863318 32449 docker.go:443] Stopping containers: [8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd]
I0725 12:47:30.863399 32449 ssh_runner.go:195] Run: docker stop 8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd
I0725 12:47:31.741999 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
I0725 12:47:31.742484 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
I0725 12:47:31.757587 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
I0725 12:47:31.793489 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
I0725 12:47:31.847759 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
I0725 12:47:31.890670 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
I0725 12:47:31.892570 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.917394 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
I0725 12:47:31.920149 32469 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0725 12:47:31.920179 32469 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.920217 32469 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.944492 32469 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0725 12:47:31.944604 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0725 12:47:31.947255 32469 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0725 12:47:31.947274 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0725 12:47:31.984214 32469 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0725 12:47:31.984231 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I0725 12:47:32.434491 32469 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0725 12:47:32.434522 32469 cache_images.go:123] Successfully loaded all cached images
I0725 12:47:32.434526 32469 cache_images.go:92] LoadImages completed in 1.247441451s
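The cache_images.go lines above follow one pattern per image: an existence check for the cached tarball on the node, an scp when it is missing, then `sudo cat ... | docker load`. Below is an illustrative, local-only sketch of the load step; it deliberately leaves out the SSH transfer and only marks where it would happen.

// Sketch only: load a cached image tarball into the Docker daemon the same
// way the log does, via "sudo cat <tarball> | docker load".
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		// In the real flow the tarball is scp'd from the host cache at this point.
		return fmt.Errorf("cached image %s not present: %w", tarball, err)
	}
	cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %q | docker load", tarball))
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}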
I0725 12:47:32.434591 32469 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0725 12:47:32.463243 32469 cni.go:95] Creating CNI manager for ""
I0725 12:47:32.463254 32469 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:32.463267 32469 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0725 12:47:32.463281 32469 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.22 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220725124546-24757 NodeName:running-upgrade-20220725124546-24757 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.22 CgroupDriver:cgroupfs C
lientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0725 12:47:32.463377 32469 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.22
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "running-upgrade-20220725124546-24757"
kubeletExtraArgs:
node-ip: 192.168.64.22
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.22"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.17.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0725 12:47:32.463433 32469 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=running-upgrade-20220725124546-24757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.22
[Install]
config:
{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0725 12:47:32.463470 32469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
I0725 12:47:32.467639 32469 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubectl
Initiating transfer...
I0725 12:47:32.467680 32469 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
I0725 12:47:32.472165 32469 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256
I0725 12:47:32.472175 32469 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256
I0725 12:47:32.472169 32469 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256
I0725 12:47:32.472210 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:47:32.472263 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
I0725 12:47:32.472269 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
I0725 12:47:32.475825 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
I0725 12:47:32.475922 32469 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
I0725 12:47:32.475936 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
I0725 12:47:32.492511 32469 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
I0725 12:47:32.640658 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
I0725 12:47:32.765279 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
I0725 12:47:33.643113 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0725 12:47:33.647421 32469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I0725 12:47:33.654434 32469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0725 12:47:33.661705 32469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2082 bytes)
I0725 12:47:33.668711 32469 ssh_runner.go:195] Run: grep 192.168.64.22 control-plane.minikube.internal$ /etc/hosts
I0725 12:47:33.671596 32469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.22 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0725 12:47:33.677578 32469 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles for IP: 192.168.64.22
I0725 12:47:33.677759 32469 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
I0725 12:47:33.677842 32469 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
I0725 12:47:33.677936 32469 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/client.key
I0725 12:47:33.677962 32469 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd
I0725 12:47:33.677981 32469 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd with IP's: [192.168.64.22 10.96.0.1 127.0.0.1 10.0.0.1]
I0725 12:47:33.842821 32469 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd ...
I0725 12:47:33.842838 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd: {Name:mk89e1dc262be7bd639c97350ec09a1a385b9a32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:33.843138 32469 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd ...
I0725 12:47:33.843146 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd: {Name:mk04002e8104c502ea4395fb47fabe2ccb2a61c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:33.843337 32469 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt
I0725 12:47:33.843524 32469 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key
I0725 12:47:33.843740 32469 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/proxy-client.key
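The certs.go lines above reuse the cached minikubeCA key and only mint a fresh apiserver serving certificate for the IPs 192.168.64.22, 10.96.0.1, 127.0.0.1 and 10.0.0.1. The following self-contained sketch issues such a certificate with Go's crypto/x509; for brevity it generates a throwaway CA in memory instead of loading the cached ca.key/ca.crt, and the key sizes and lifetimes are illustrative assumptions.

// Sketch only: issue an apiserver certificate with the IP SANs listed in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would load the cached ca.key/ca.crt instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the log
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // illustrative lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the crypto.go line above.
		IPAddresses: []net.IP{net.ParseIP("192.168.64.22"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}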
I0725 12:47:33.843922 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem (1338 bytes)
W0725 12:47:33.843961 32469 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757_empty.pem, impossibly tiny 0 bytes
I0725 12:47:33.843971 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
I0725 12:47:33.844003 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1078 bytes)
I0725 12:47:33.844032 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
I0725 12:47:33.844059 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1679 bytes)
I0725 12:47:33.844126 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:33.844650 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0725 12:47:33.854538 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0725 12:47:33.863826 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0725 12:47:33.873420 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0725 12:47:33.882368 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0725 12:47:33.891411 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0725 12:47:33.901467 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0725 12:47:33.910211 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0725 12:47:33.919862 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0725 12:47:33.929345 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem --> /usr/share/ca-certificates/24757.pem (1338 bytes)
I0725 12:47:33.938668 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /usr/share/ca-certificates/247572.pem (1708 bytes)
I0725 12:47:33.947792 32469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
I0725 12:47:33.954304 32469 ssh_runner.go:195] Run: openssl version
I0725 12:47:33.957730 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0725 12:47:33.962477 32469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:33.965403 32469 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:54 /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:33.965444 32469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:33.973126 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0725 12:47:33.977142 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24757.pem && ln -fs /usr/share/ca-certificates/24757.pem /etc/ssl/certs/24757.pem"
I0725 12:47:33.982074 32469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24757.pem
I0725 12:47:33.984977 32469 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:57 /usr/share/ca-certificates/24757.pem
I0725 12:47:33.985020 32469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24757.pem
I0725 12:47:33.992835 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24757.pem /etc/ssl/certs/51391683.0"
I0725 12:47:33.997177 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247572.pem && ln -fs /usr/share/ca-certificates/247572.pem /etc/ssl/certs/247572.pem"
I0725 12:47:34.002260 32469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247572.pem
I0725 12:47:34.005351 32469 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:57 /usr/share/ca-certificates/247572.pem
I0725 12:47:34.005391 32469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247572.pem
I0725 12:47:34.013100 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/247572.pem /etc/ssl/certs/3ec20f2e.0"
I0725 12:47:34.017828 32469 kubeadm.go:395] StartCluster: {Name:running-upgrade-20220725124546-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 Kubernet
esConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:34.017910 32469 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:34.038771 32469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0725 12:47:34.043395 32469 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0725 12:47:34.043428 32469 kubeadm.go:626] restartCluster start
I0725 12:47:34.043470 32469 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0725 12:47:34.047753 32469 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.048162 32469 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220725124546-24757" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:34.048333 32469 kubeconfig.go:127] "running-upgrade-20220725124546-24757" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
I0725 12:47:34.048691 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:34.049575 32469 kapi.go:59] client config for running-upgrade-20220725124546-24757: &rest.Config{Host:"https://192.168.64.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/runn
ing-upgrade-20220725124546-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:34.050045 32469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0725 12:47:34.054115 32469 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -1,4 +1,4 @@
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.22
@@ -12,32 +12,63 @@
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
- name: minikube
+ name: "running-upgrade-20220725124546-24757"
+ kubeletExtraArgs:
+ node-ip: 192.168.64.22
taints: []
---
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
+ certSANs: ["127.0.0.1", "localhost", "192.168.64.22"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+controllerManager:
+ extraArgs:
+ allocate-node-cidrs: "true"
+ leader-elect: "false"
+scheduler:
+ extraArgs:
+ leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
-clusterName: kubernetes
-controlPlaneEndpoint: localhost:8443
+clusterName: mk
+controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
+ extraArgs:
+ proxy-refresh-interval: "70000"
kubernetesVersion: v1.17.0
networking:
dnsDomain: cluster.local
- podSubnet: ""
+ podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
+authentication:
+ x509:
+ clientCAFile: /var/lib/minikube/certs/ca.crt
+cgroupDriver: cgroupfs
+clusterDomain: "cluster.local"
+# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
+failSwapOn: false
+staticPodPath: /etc/kubernetes/manifests
+---
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+clusterCIDR: "10.244.0.0/16"
+metricsBindAddress: 0.0.0.0:10249
+conntrack:
+ maxPerCore: 0
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
+ tcpEstablishedTimeout: 0s
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
+ tcpCloseWaitTimeout: 0s
-- /stdout --
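The "needs reconfigure: configs differ" decision above comes from diffing the deployed /var/tmp/minikube/kubeadm.yaml against the freshly rendered kubeadm.yaml.new. A small sketch of that check, assuming plain os/exec: diff exit status 1 means the files differ, 0 means the existing config can be kept.

// Sketch only: decide whether a cluster restart needs reconfiguration by
// diffing the old and new kubeadm configs, as the log above does.
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfigure(current, generated string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", current, generated).CombinedOutput()
	if err == nil {
		return false, "", nil // identical configs
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // files differ; out holds the unified diff
	}
	return false, "", err // diff itself failed (missing file, etc.)
}

func main() {
	differ, diff, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if differ {
		fmt.Println("needs reconfigure: configs differ:\n" + diff)
	}
}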
I0725 12:47:34.054129 32469 kubeadm.go:1092] stopping kube-system containers ...
I0725 12:47:34.054189 32469 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:34.076347 32469 docker.go:443] Stopping containers: [4367743c93ba 02c56a28708a 557def1c077d a38b11b910ca faeff5a354ad fda86db631da 8f3fe5d92c6b 9760fca15e21 d3200f5d0b91 7f0c019b74b5]
I0725 12:47:34.076413 32469 ssh_runner.go:195] Run: docker stop 4367743c93ba 02c56a28708a 557def1c077d a38b11b910ca faeff5a354ad fda86db631da 8f3fe5d92c6b 9760fca15e21 d3200f5d0b91 7f0c019b74b5
I0725 12:47:34.098775 32469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0725 12:47:34.105809 32469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0725 12:47:34.110083 32469 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5625 Jul 25 19:46 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jul 25 19:46 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1981 Jul 25 19:47 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5605 Jul 25 19:46 /etc/kubernetes/scheduler.conf
I0725 12:47:34.110192 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0725 12:47:34.114184 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.114262 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0725 12:47:34.118455 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0725 12:47:34.122212 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.122248 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0725 12:47:34.126143 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0725 12:47:34.129958 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.129994 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0725 12:47:34.134006 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0725 12:47:34.137832 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.137876 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0725 12:47:34.141750 32469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0725 12:47:34.146202 32469 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0725 12:47:34.146212 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:34.188027 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.124387 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.272929 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.353057 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.418959 32469 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:47:35.419055 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:35.930921 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:36.429686 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:36.929679 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:37.431210 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:37.929517 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:38.430454 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:38.929930 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:39.429482 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:39.929596 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:40.431467 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:40.929623 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:41.429598 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:40.471014 32449 ssh_runner.go:235] Completed: docker stop 8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd: (9.607781038s)
I0725 12:47:40.471069 32449 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0725 12:47:40.497793 32449 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0725 12:47:40.504564 32449 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jul 25 19:46 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jul 25 19:46 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2043 Jul 25 19:46 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Jul 25 19:46 /etc/kubernetes/scheduler.conf
I0725 12:47:40.504613 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0725 12:47:40.510944 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0725 12:47:40.517741 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0725 12:47:40.523731 32449 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:40.523765 32449 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0725 12:47:40.529929 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0725 12:47:40.535794 32449 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:40.535826 32449 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0725 12:47:40.542016 32449 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0725 12:47:40.548372 32449 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0725 12:47:40.548382 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:40.585742 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.045264 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.234558 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.280909 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.330190 32449 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:47:41.330251 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:41.839951 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.339855 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.838994 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.849968 32449 api_server.go:71] duration metric: took 1.519809905s to wait for apiserver process to appear ...
I0725 12:47:42.849984 32449 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:47:42.849997 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:41.929449 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.429466 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.930789 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:43.431399 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:43.929565 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:44.431573 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:44.931575 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:45.429633 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:45.929405 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:46.429677 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:47.258116 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0725 12:47:47.258131 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0725 12:47:47.760362 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:47.766179 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0725 12:47:47.766196 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0725 12:47:48.258229 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:48.262225 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0725 12:47:48.262237 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0725 12:47:48.760315 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:48.765926 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 200:
ok
I0725 12:47:48.771384 32449 api_server.go:140] control plane version: v1.24.2
I0725 12:47:48.771425 32449 api_server.go:130] duration metric: took 5.921533487s to wait for apiserver health ...
I0725 12:47:48.771438 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:48.771458 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:48.771481 32449 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:47:48.776990 32449 system_pods.go:59] 7 kube-system pods found
I0725 12:47:48.777004 32449 system_pods.go:61] "coredns-6d4b75cb6d-rglh7" [bfdceddb-f0ec-481c-a4a2-ce56bb133d27] Running
I0725 12:47:48.777010 32449 system_pods.go:61] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0725 12:47:48.777018 32449 system_pods.go:61] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0725 12:47:48.777023 32449 system_pods.go:61] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:47:48.777029 32449 system_pods.go:61] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0725 12:47:48.777034 32449 system_pods.go:61] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:47:48.777038 32449 system_pods.go:61] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:47:48.777042 32449 system_pods.go:74] duration metric: took 5.556495ms to wait for pod list to return data ...
I0725 12:47:48.777048 32449 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:47:48.779296 32449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0725 12:47:48.779312 32449 node_conditions.go:123] node cpu capacity is 2
I0725 12:47:48.779321 32449 node_conditions.go:105] duration metric: took 2.26989ms to run NodePressure ...
I0725 12:47:48.779331 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:48.896760 32449 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0725 12:47:48.899954 32449 kubeadm.go:777] kubelet initialised
I0725 12:47:48.899964 32449 kubeadm.go:778] duration metric: took 3.186627ms waiting for restarted kubelet to initialise ...
I0725 12:47:48.899971 32449 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:48.903437 32449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace to be "Ready" ...
I0725 12:47:48.907836 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:48.907844 32449 pod_ready.go:81] duration metric: took 4.397671ms waiting for pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace to be "Ready" ...
I0725 12:47:48.907849 32449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:47:46.929504 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:47.431501 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:47.931464 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:48.430664 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:48.929375 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:49.429346 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:49.929655 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:50.430437 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:50.929624 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:51.429372 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:50.917931 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:53.417934 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:51.930509 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:52.430439 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:52.930290 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:53.430033 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:53.931374 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:54.430420 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:54.931336 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:55.429562 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:55.929483 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:56.429155 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:55.419093 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:57.915085 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:58.916472 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:58.916486 32449 pod_ready.go:81] duration metric: took 10.008815507s waiting for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:47:58.916492 32449 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.431517 32449 pod_ready.go:92] pod "etcd-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.431549 32449 pod_ready.go:81] duration metric: took 515.03489ms waiting for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.431556 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.435193 32449 pod_ready.go:92] pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.435201 32449 pod_ready.go:81] duration metric: took 3.640991ms waiting for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.435208 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.438379 32449 pod_ready.go:92] pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.438387 32449 pod_ready.go:81] duration metric: took 3.174279ms waiting for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.438394 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.442279 32449 pod_ready.go:92] pod "kube-proxy-vvgjh" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.442289 32449 pod_ready.go:81] duration metric: took 3.889821ms waiting for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.442295 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.714855 32449 pod_ready.go:92] pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.714865 32449 pod_ready.go:81] duration metric: took 272.570349ms waiting for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.714870 32449 pod_ready.go:38] duration metric: took 10.815102423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:59.714885 32449 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0725 12:47:59.722486 32449 ops.go:34] apiserver oom_adj: -16
I0725 12:47:59.722496 32449 kubeadm.go:630] restartCluster took 31.961985619s
I0725 12:47:59.722501 32449 kubeadm.go:397] StartCluster complete in 32.02516291s
I0725 12:47:59.722514 32449 settings.go:142] acquiring lock: {Name:mkd3ca246a72d4c75785a7cc650cfc3c06de2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:59.722609 32449 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:59.723211 32449 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:59.724153 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:59.726081 32449 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220725124607-24757" rescaled to 1
I0725 12:47:59.726118 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0725 12:47:59.726114 32449 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0725 12:47:59.726141 32449 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0725 12:47:59.768656 32449 out.go:177] * Verifying Kubernetes components...
I0725 12:47:59.726174 32449 addons.go:65] Setting storage-provisioner=true in profile "pause-20220725124607-24757"
I0725 12:47:59.726175 32449 addons.go:65] Setting default-storageclass=true in profile "pause-20220725124607-24757"
I0725 12:47:59.726311 32449 config.go:178] Loaded profile config "pause-20220725124607-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:47:59.787218 32449 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0725 12:47:59.789703 32449 addons.go:153] Setting addon storage-provisioner=true in "pause-20220725124607-24757"
I0725 12:47:59.789706 32449 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220725124607-24757"
W0725 12:47:59.789715 32449 addons.go:162] addon storage-provisioner should already be in state true
I0725 12:47:59.789742 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:47:59.789751 32449 host.go:66] Checking if "pause-20220725124607-24757" exists ...
I0725 12:47:59.790020 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.790041 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.790044 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.790058 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.797714 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51473
I0725 12:47:59.798103 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51475
I0725 12:47:59.798272 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.798435 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.798707 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.798727 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.798816 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.798829 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.798980 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.799060 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.799231 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.799341 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.799436 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.799441 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.799466 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.802301 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:59.807605 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51477
I0725 12:47:59.808268 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.808662 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.808673 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.808961 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.809100 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.809218 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.809345 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.810223 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:59.810493 32449 addons.go:153] Setting addon default-storageclass=true in "pause-20220725124607-24757"
I0725 12:47:59.831555 32449 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:59.816767 32449 node_ready.go:35] waiting up to 6m0s for node "pause-20220725124607-24757" to be "Ready" ...
W0725 12:47:59.831555 32449 addons.go:162] addon default-storageclass should already be in state true
I0725 12:47:59.852810 32449 host.go:66] Checking if "pause-20220725124607-24757" exists ...
I0725 12:47:59.852826 32449 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:47:59.852835 32449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0725 12:47:59.852853 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:59.853049 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:59.853161 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:59.853180 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.853219 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.853264 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:59.853659 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:59.861671 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51480
I0725 12:47:59.862254 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.862795 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.862844 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.863107 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.863739 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.863796 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.871393 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51482
I0725 12:47:59.871804 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.872263 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.872295 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.872592 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.872763 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.872884 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.872977 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.874096 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:59.874327 32449 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0725 12:47:59.874337 32449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0725 12:47:59.874346 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:59.874451 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:59.874572 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:59.874685 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:59.874778 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:59.915855 32449 node_ready.go:49] node "pause-20220725124607-24757" has status "Ready":"True"
I0725 12:47:59.915866 32449 node_ready.go:38] duration metric: took 63.170605ms waiting for node "pause-20220725124607-24757" to be "Ready" ...
I0725 12:47:59.915875 32449 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:59.938916 32449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0725 12:47:59.970519 32449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:48:00.117232 32449 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.513514 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:00.513523 32449 pod_ready.go:81] duration metric: took 396.286746ms waiting for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.513529 32449 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.539195 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539210 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539198 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539241 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539416 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539417 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539425 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539420 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539436 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539437 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539457 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539460 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539463 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539466 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539639 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539643 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539654 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539655 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539656 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539667 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539671 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539682 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539820 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539830 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.563090 32449 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0725 12:47:56.930830 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:57.431141 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:57.437183 32469 api_server.go:71] duration metric: took 22.018659349s to wait for apiserver process to appear ...
I0725 12:47:57.437203 32469 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:47:57.437218 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:00.653801 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0725 12:48:00.653817 32469 api_server.go:102] status: https://192.168.64.22:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0725 12:48:01.154874 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:01.160888 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0725 12:48:01.160903 32469 api_server.go:102] status: https://192.168.64.22:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0725 12:48:01.654077 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:01.658643 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0725 12:48:01.658662 32469 api_server.go:102] status: https://192.168.64.22:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0725 12:48:02.155582 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:02.161203 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 200:
ok
I0725 12:48:02.165958 32469 api_server.go:140] control plane version: v1.17.0
I0725 12:48:02.165972 32469 api_server.go:130] duration metric: took 4.72885577s to wait for apiserver health ...
I0725 12:48:02.165978 32469 cni.go:95] Creating CNI manager for ""
I0725 12:48:02.165982 32469 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:48:02.165991 32469 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:48:02.170002 32469 system_pods.go:59] 4 kube-system pods found
I0725 12:48:02.170018 32469 system_pods.go:61] "coredns-6955765f44-5jfdg" [5020da1b-6a45-4b39-802d-5c9520158377] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0725 12:48:02.170023 32469 system_pods.go:61] "coredns-6955765f44-gnd7x" [0b93954b-6f29-427e-bd15-676a6271e58c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0725 12:48:02.170047 32469 system_pods.go:61] "kube-proxy-fw74h" [696567b4-f041-40e0-9649-7fdddfa70df2] Pending
I0725 12:48:02.170051 32469 system_pods.go:61] "storage-provisioner" [af16d783-2ed9-45b6-ac15-a47946381e08] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0725 12:48:02.170056 32469 system_pods.go:74] duration metric: took 4.061205ms to wait for pod list to return data ...
I0725 12:48:02.170062 32469 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:02.172207 32469 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
I0725 12:48:02.172219 32469 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:02.172226 32469 node_conditions.go:105] duration metric: took 2.160948ms to run NodePressure ...
I0725 12:48:02.172241 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:48:02.318381 32469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0725 12:48:02.325026 32469 ops.go:34] apiserver oom_adj: -16
I0725 12:48:02.325035 32469 kubeadm.go:630] restartCluster took 28.282153975s
I0725 12:48:02.325041 32469 kubeadm.go:397] StartCluster complete in 28.307791297s
I0725 12:48:02.325055 32469 settings.go:142] acquiring lock: {Name:mkd3ca246a72d4c75785a7cc650cfc3c06de2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:48:02.325122 32469 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:48:02.326257 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:48:02.327261 32469 kapi.go:59] client config for running-upgrade-20220725124546-24757: &rest.Config{Host:"https://192.168.64.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:48:02.837236 32469 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "running-upgrade-20220725124546-24757" rescaled to 1
I0725 12:48:02.837279 32469 start.go:211] Will wait 6m0s for node &{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0725 12:48:02.837323 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0725 12:48:02.837381 32469 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0725 12:48:02.837464 32469 config.go:178] Loaded profile config "running-upgrade-20220725124546-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0725 12:48:02.858837 32469 addons.go:65] Setting default-storageclass=true in profile "running-upgrade-20220725124546-24757"
I0725 12:48:02.858847 32469 addons.go:65] Setting storage-provisioner=true in profile "running-upgrade-20220725124546-24757"
I0725 12:48:02.858871 32469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-20220725124546-24757"
I0725 12:48:02.858713 32469 out.go:177] * Verifying Kubernetes components...
I0725 12:48:02.858893 32469 addons.go:153] Setting addon storage-provisioner=true in "running-upgrade-20220725124546-24757"
W0725 12:48:02.858912 32469 addons.go:162] addon storage-provisioner should already be in state true
I0725 12:48:02.859522 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.895683 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:48:02.895717 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.895724 32469 host.go:66] Checking if "running-upgrade-20220725124546-24757" exists ...
I0725 12:48:02.897226 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.897825 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.902736 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51490
I0725 12:48:02.903140 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.903524 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.903535 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.903754 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.903848 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:48:02.903941 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:48:02.904026 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:48:02.904347 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51492
I0725 12:48:02.904625 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.904936 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.904954 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.905157 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.905503 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.905553 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.905568 32469 kapi.go:59] client config for running-upgrade-20220725124546-24757: &rest.Config{Host:"https://192.168.64.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:48:02.911425 32469 addons.go:153] Setting addon default-storageclass=true in "running-upgrade-20220725124546-24757"
W0725 12:48:02.911442 32469 addons.go:162] addon default-storageclass should already be in state true
I0725 12:48:02.911462 32469 host.go:66] Checking if "running-upgrade-20220725124546-24757" exists ...
I0725 12:48:02.911756 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.911789 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.913915 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51494
I0725 12:48:02.914326 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.914784 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.914798 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.914994 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.915089 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:48:02.915170 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:48:02.915252 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:48:02.916056 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:48:02.918394 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51496
I0725 12:48:02.937579 32469 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:48:00.621161 32449 addons.go:414] enableAddons completed in 895.045234ms
I0725 12:48:00.914536 32449 pod_ready.go:92] pod "etcd-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:00.914568 32449 pod_ready.go:81] duration metric: took 401.042289ms waiting for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.914575 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.315399 32449 pod_ready.go:92] pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:01.315410 32449 pod_ready.go:81] duration metric: took 400.837301ms waiting for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.315417 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.713652 32449 pod_ready.go:92] pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:01.713662 32449 pod_ready.go:81] duration metric: took 398.24262ms waiting for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.713669 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.116833 32449 pod_ready.go:92] pod "kube-proxy-vvgjh" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:02.116846 32449 pod_ready.go:81] duration metric: took 403.180188ms waiting for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.116857 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.514872 32449 pod_ready.go:92] pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:02.514885 32449 pod_ready.go:81] duration metric: took 398.015294ms waiting for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.514892 32449 pod_ready.go:38] duration metric: took 2.599056789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:48:02.514914 32449 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:48:02.514971 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:48:02.524797 32449 api_server.go:71] duration metric: took 2.798697005s to wait for apiserver process to appear ...
I0725 12:48:02.524812 32449 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:48:02.524819 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:48:02.528761 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 200:
ok
I0725 12:48:02.529297 32449 api_server.go:140] control plane version: v1.24.2
I0725 12:48:02.529305 32449 api_server.go:130] duration metric: took 4.48935ms to wait for apiserver health ...
I0725 12:48:02.529310 32449 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:48:02.717715 32449 system_pods.go:59] 7 kube-system pods found
I0725 12:48:02.717729 32449 system_pods.go:61] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running
I0725 12:48:02.717733 32449 system_pods.go:61] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running
I0725 12:48:02.717739 32449 system_pods.go:61] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:48:02.717743 32449 system_pods.go:61] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running
I0725 12:48:02.717746 32449 system_pods.go:61] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:48:02.717750 32449 system_pods.go:61] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:48:02.717753 32449 system_pods.go:61] "storage-provisioner" [7d189436-f57b-4db0-a2c3-534d702f468f] Running
I0725 12:48:02.717757 32449 system_pods.go:74] duration metric: took 188.447508ms to wait for pod list to return data ...
I0725 12:48:02.717768 32449 default_sa.go:34] waiting for default service account to be created ...
I0725 12:48:02.914666 32449 default_sa.go:45] found service account: "default"
I0725 12:48:02.914676 32449 default_sa.go:55] duration metric: took 196.907597ms for default service account to be created ...
I0725 12:48:02.914681 32449 system_pods.go:116] waiting for k8s-apps to be running ...
I0725 12:48:03.116281 32449 system_pods.go:86] 7 kube-system pods found
I0725 12:48:03.116295 32449 system_pods.go:89] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running
I0725 12:48:03.116300 32449 system_pods.go:89] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running
I0725 12:48:03.116304 32449 system_pods.go:89] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:48:03.116307 32449 system_pods.go:89] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running
I0725 12:48:03.116311 32449 system_pods.go:89] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:48:03.116314 32449 system_pods.go:89] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:48:03.116319 32449 system_pods.go:89] "storage-provisioner" [7d189436-f57b-4db0-a2c3-534d702f468f] Running
I0725 12:48:03.116334 32449 system_pods.go:126] duration metric: took 201.650654ms to wait for k8s-apps to be running ...
I0725 12:48:03.116348 32449 system_svc.go:44] waiting for kubelet service to be running ....
I0725 12:48:03.116413 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:48:03.126610 32449 system_svc.go:56] duration metric: took 10.263673ms WaitForService to wait for kubelet.
I0725 12:48:03.126626 32449 kubeadm.go:572] duration metric: took 3.400540205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0725 12:48:03.126644 32449 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:03.314389 32449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0725 12:48:03.314403 32449 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:03.314410 32449 node_conditions.go:105] duration metric: took 187.766416ms to run NodePressure ...
I0725 12:48:03.314435 32449 start.go:216] waiting for startup goroutines ...
I0725 12:48:03.348116 32449 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
I0725 12:48:02.938104 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.953776 32469 kubeadm.go:509] skip waiting for components based on config.
I0725 12:48:02.958690 32469 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:02.953806 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0725 12:48:02.958753 32469 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:48:02.958763 32469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0725 12:48:02.958775 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:48:02.958913 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:48:02.959060 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:48:02.959095 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.959105 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.959179 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:48:02.959273 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:48:02.959303 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.959664 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.959688 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.961635 32469 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
I0725 12:48:02.961655 32469 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:02.961666 32469 node_conditions.go:105] duration metric: took 2.969466ms to run NodePressure ...
I0725 12:48:02.961676 32469 start.go:216] waiting for startup goroutines ...
I0725 12:48:02.966514 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51499
I0725 12:48:02.966876 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.967251 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.967267 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.967478 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.967572 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:48:02.967663 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:48:02.967745 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:48:02.968571 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:48:02.968741 32469 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0725 12:48:02.968748 32469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0725 12:48:02.968760 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:48:02.968840 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:48:02.968961 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:48:02.969049 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:48:02.969156 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:48:03.048242 32469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:48:03.052162 32469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0725 12:48:03.300822 32469 start.go:809] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS
I0725 12:48:03.342252 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.342267 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.342566 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.342567 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.342579 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.342593 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.342602 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.342798 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.342807 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.342813 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.355606 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.355618 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.355775 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.355776 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.355788 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.355795 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.355802 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.355940 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.355954 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.355959 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.355973 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.355985 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.356133 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.356142 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.356142 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.423611 32449 out.go:177] * Done! kubectl is now configured to use "pause-20220725124607-24757" cluster and "default" namespace by default
I0725 12:48:03.498891 32469 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0725 12:48:03.573678 32469 addons.go:414] enableAddons completed in 736.324707ms
I0725 12:48:03.606529 32469 start.go:506] kubectl: 1.24.1, cluster: 1.17.0 (minor skew: 7)
I0725 12:48:03.643611 32469 out.go:177]
W0725 12:48:03.680965 32469 out.go:239] ! /usr/local/bin/kubectl is version 1.24.1, which may have incompatibilites with Kubernetes 1.17.0.
I0725 12:48:03.702704 32469 out.go:177] - Want kubectl v1.17.0? Try 'minikube kubectl -- get pods -A'
I0725 12:48:03.744700 32469 out.go:177] * Done! kubectl is now configured to use "running-upgrade-20220725124546-24757" cluster and "" namespace by default
*
* ==> Docker <==
* -- Journal begins at Mon 2022-07-25 19:46:16 UTC, ends at Mon 2022-07-25 19:48:04 UTC. --
Jul 25 19:47:43 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:43.316448750Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a51243e160348a9c7d895ff4b74f6db59fc3dee2a3ffb5381b3058049f35d0ca pid=5315 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288094287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288157378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288166564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288579698Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0b271d8323e469e9425826e335d92e59256ebd75ce42f8009b7a7279eefc07da pid=5509 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608781933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608856088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608865061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608968379Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bcc7e24af2f787666205ec9176991752cc04dba24e846dea67461ab2186560da pid=5555 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.703930249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.704080973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.704139164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.704374110Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fceb0f5d9dc7201a35c92079ae95fed690deec2aa8c7e3005763dec6094d8a75 pid=5605 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764529291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764691208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764748598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764901997Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0dbd38c4ed4c42601d566168de86ef6e6b28cc24e4ae6eb8cf09a49921cd8491 pid=5642 runtime=io.containerd.runc.v2
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208204985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208247966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208260928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208617795Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/75583aa7957ba5b117c984936a2c407dab53b4eac952fb60df2da647aab86e92 pid=5919 runtime=io.containerd.runc.v2
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497457709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497537943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497546923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497968399Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f4d3b9b8fc44720aaf0a35ed3cd4bd0adbc8ef91a113205bdaaaa98a79defe00 pid=5962 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f4d3b9b8fc447 6e38f40d628db 4 seconds ago Running storage-provisioner 0 75583aa7957ba
0dbd38c4ed4c4 a634548d10b03 16 seconds ago Running kube-proxy 2 bcc7e24af2f78
fceb0f5d9dc72 a4ca41631cc7a 16 seconds ago Running coredns 2 0b271d8323e46
a51243e160348 5d725196c1f47 22 seconds ago Running kube-scheduler 2 5e9442d88e3be
0964c918df2bc aebe758cef4cd 23 seconds ago Running etcd 2 faca62db02339
974594e52480a 34cdf99b1bb3b 23 seconds ago Running kube-controller-manager 2 806492f0a2c24
7249d3d37a7d2 d3377ffb7177c 23 seconds ago Running kube-apiserver 2 4df5104a7ad6c
8abc60a3d3664 5d725196c1f47 37 seconds ago Exited kube-scheduler 1 fa5cdb6bc0bc1
148739a1c8bf7 a634548d10b03 37 seconds ago Exited kube-proxy 1 8c03efc958d74
6c4c14ed6c7bb a4ca41631cc7a 38 seconds ago Exited coredns 1 ac68acceae4b2
82a2874088cf8 34cdf99b1bb3b 50 seconds ago Exited kube-controller-manager 1 7c07ade5b55ec
4bbd9292ccc1e d3377ffb7177c 51 seconds ago Exited kube-apiserver 1 fdfac1f68e49f
aa9e0a649a58c aebe758cef4cd 51 seconds ago Exited etcd 1 8dc345a99c847
*
* ==> coredns [6c4c14ed6c7b] <==
* [INFO] SIGTERM: Shutting down servers then terminating
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/health: Going into lameduck mode for 5s
[ERROR] plugin/errors: 2 9149239292398430472.1479233848501534760. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
[WARNING] plugin/health: Local health request to "http://:8080/health" failed: Get "http://:8080/health": dial tcp :8080: connect: connection reset by peer
[ERROR] plugin/errors: 2 9149239292398430472.1479233848501534760. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
*
* ==> coredns [fceb0f5d9dc7] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: pause-20220725124607-24757
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220725124607-24757
kubernetes.io/os=linux
minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
minikube.k8s.io/name=pause-20220725124607-24757
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_25T12_46_46_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 25 Jul 2022 19:46:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220725124607-24757
AcquireTime: <unset>
RenewTime: Mon, 25 Jul 2022 19:47:57 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.23
Hostname: pause-20220725124607-24757
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017588Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017588Ki
pods: 110
System Info:
Machine ID: 0b81cda4acc64db9b36933459060308a
System UUID: 6d8c11ed-0000-0000-b12e-149d997cd0f1
Boot ID: 3b06b3c4-9b77-48bb-ad0b-163ba01d6234
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-wnp4h 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (3%!)(MISSING) 170Mi (8%!)(MISSING) 65s
kube-system etcd-pause-20220725124607-24757 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (5%!)(MISSING) 0 (0%!)(MISSING) 79s
kube-system kube-apiserver-pause-20220725124607-24757 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 79s
kube-system kube-controller-manager-pause-20220725124607-24757 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 79s
kube-system kube-proxy-vvgjh 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 66s
kube-system kube-scheduler-pause-20220725124607-24757 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 78s
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%!)(MISSING) 0 (0%!)(MISSING)
memory 170Mi (8%!)(MISSING) 170Mi (8%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 64s kube-proxy
Normal Starting 15s kube-proxy
Normal NodeAllocatableEnforced 90s kubelet Updated Node Allocatable limit across pods
Normal Starting 90s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 90s (x4 over 90s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 90s (x3 over 90s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 90s (x3 over 90s) kubelet Node pause-20220725124607-24757 status is now: NodeHasNoDiskPressure
Normal NodeReady 79s kubelet Node pause-20220725124607-24757 status is now: NodeReady
Normal Starting 79s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 79s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 79s kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 79s kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 79s kubelet Node pause-20220725124607-24757 status is now: NodeHasNoDiskPressure
Normal RegisteredNode 66s node-controller Node pause-20220725124607-24757 event: Registered Node pause-20220725124607-24757 in Controller
Normal Starting 24s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 24s (x8 over 24s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 24s (x8 over 24s) kubelet Node pause-20220725124607-24757 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 24s (x7 over 24s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 24s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5s node-controller Node pause-20220725124607-24757 event: Registered Node pause-20220725124607-24757 in Controller
*
* ==> dmesg <==
* [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.939705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +3.660330] systemd-fstab-generator[552]: Ignoring "noauto" for root device
[ +0.094408] systemd-fstab-generator[563]: Ignoring "noauto" for root device
[ +5.949424] systemd-fstab-generator[783]: Ignoring "noauto" for root device
[ +1.347769] kauditd_printk_skb: 16 callbacks suppressed
[ +0.237236] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +0.086452] systemd-fstab-generator[954]: Ignoring "noauto" for root device
[ +0.092085] systemd-fstab-generator[965]: Ignoring "noauto" for root device
[ +1.398983] systemd-fstab-generator[1115]: Ignoring "noauto" for root device
[ +0.091065] systemd-fstab-generator[1126]: Ignoring "noauto" for root device
[ +3.454273] systemd-fstab-generator[1352]: Ignoring "noauto" for root device
[ +0.498420] kauditd_printk_skb: 68 callbacks suppressed
[ +11.241941] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
[Jul25 19:47] kauditd_printk_skb: 7 callbacks suppressed
[ +5.470060] systemd-fstab-generator[2934]: Ignoring "noauto" for root device
[ +0.120849] systemd-fstab-generator[2945]: Ignoring "noauto" for root device
[ +0.127788] systemd-fstab-generator[2956]: Ignoring "noauto" for root device
[ +0.332668] kauditd_printk_skb: 16 callbacks suppressed
[ +20.655882] systemd-fstab-generator[4010]: Ignoring "noauto" for root device
[ +0.127786] systemd-fstab-generator[4103]: Ignoring "noauto" for root device
[ +14.271837] systemd-fstab-generator[4861]: Ignoring "noauto" for root device
[ +7.791947] kauditd_printk_skb: 31 callbacks suppressed
*
* ==> etcd [0964c918df2b] <==
* {"level":"info","ts":"2022-07-25T19:47:43.971Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"358a38a4be5dda21","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-07-25T19:47:43.980Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 switched to configuration voters=(3857958311015864865)"}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","added-peer-id":"358a38a4be5dda21","added-peer-peer-urls":["https://192.168.64.23:2380"]}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-25T19:47:43.982Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-25T19:47:43.988Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"358a38a4be5dda21","initial-advertise-peer-urls":["https://192.168.64.23:2380"],"listen-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-25T19:47:43.982Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:43.988Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-25T19:47:43.989Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:45.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 is starting a new election at term 3"}
{"level":"info","ts":"2022-07-25T19:47:45.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became pre-candidate at term 3"}
{"level":"info","ts":"2022-07-25T19:47:45.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgPreVoteResp from 358a38a4be5dda21 at term 3"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became candidate at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgVoteResp from 358a38a4be5dda21 at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became leader at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 358a38a4be5dda21 elected leader 358a38a4be5dda21 at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.746Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"358a38a4be5dda21","local-member-attributes":"{Name:pause-20220725124607-24757 ClientURLs:[https://192.168.64.23:2379]}","request-path":"/0/members/358a38a4be5dda21/attributes","cluster-id":"bf21a475ce91bca1","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-25T19:47:45.746Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-25T19:47:45.748Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.23:2379"}
*
* ==> etcd [aa9e0a649a58] <==
* {"level":"info","ts":"2022-07-25T19:47:15.069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-25T19:47:15.070Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"358a38a4be5dda21","initial-advertise-peer-urls":["https://192.168.64.23:2380"],"listen-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-25T19:47:15.070Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 is starting a new election at term 2"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became pre-candidate at term 2"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgPreVoteResp from 358a38a4be5dda21 at term 2"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became candidate at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgVoteResp from 358a38a4be5dda21 at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became leader at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 358a38a4be5dda21 elected leader 358a38a4be5dda21 at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.363Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"358a38a4be5dda21","local-member-attributes":"{Name:pause-20220725124607-24757 ClientURLs:[https://192.168.64.23:2379]}","request-path":"/0/members/358a38a4be5dda21/attributes","cluster-id":"bf21a475ce91bca1","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-25T19:47:16.363Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:16.370Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-25T19:47:16.370Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:16.370Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.23:2379"}
{"level":"info","ts":"2022-07-25T19:47:16.390Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-25T19:47:16.390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-25T19:47:16.689Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-07-25T19:47:16.689Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220725124607-24757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"]}
WARNING: 2022/07/25 19:47:16 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/07/25 19:47:16 [core] grpc: addrConn.createTransport failed to connect to {192.168.64.23:2379 192.168.64.23:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.64.23:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-07-25T19:47:16.701Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"358a38a4be5dda21","current-leader-member-id":"358a38a4be5dda21"}
{"level":"info","ts":"2022-07-25T19:47:16.735Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:16.738Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:16.738Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220725124607-24757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"]}
*
* ==> kernel <==
* 19:48:06 up 1 min, 0 users, load average: 0.94, 0.45, 0.17
Linux pause-20220725124607-24757 5.10.57 #1 SMP Sat Jul 9 07:31:52 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [4bbd9292ccc1] <==
* W0725 19:47:21.977393 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.106201 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.134715 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.144887 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.219193 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.241381 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.259203 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.314408 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.401529 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:24.786402 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.181079 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.192532 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.220692 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.228908 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.439361 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.786812 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.821600 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.873320 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.105529 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.108720 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.124114 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.230369 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.288934 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.340338 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.486859 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
*
* ==> kube-apiserver [7249d3d37a7d] <==
* I0725 19:47:47.552808 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0725 19:47:47.552901 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0725 19:47:47.552977 1 crd_finalizer.go:266] Starting CRDFinalizer
I0725 19:47:47.563204 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0725 19:47:47.587654 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0725 19:47:47.588154 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0725 19:47:47.588162 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0725 19:47:47.649579 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0725 19:47:47.649937 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0725 19:47:47.651903 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0725 19:47:47.652260 1 cache.go:39] Caches are synced for autoregister controller
I0725 19:47:47.652539 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0725 19:47:47.652601 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0725 19:47:47.684977 1 shared_informer.go:262] Caches are synced for node_authorizer
I0725 19:47:47.689739 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0725 19:47:48.303009 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0725 19:47:48.551057 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0725 19:47:49.169392 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0725 19:47:49.179455 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0725 19:47:49.206963 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0725 19:47:49.217169 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0725 19:47:49.221506 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0725 19:47:49.921186 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0725 19:48:00.206294 1 controller.go:611] quota admission added evaluator for: endpoints
I0725 19:48:00.275290 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [82a2874088cf] <==
* I0725 19:47:16.319977 1 serving.go:348] Generated self-signed cert in-memory
*
* ==> kube-controller-manager [974594e52480] <==
* I0725 19:48:00.216545 1 shared_informer.go:262] Caches are synced for namespace
I0725 19:48:00.216556 1 shared_informer.go:262] Caches are synced for GC
I0725 19:48:00.218926 1 shared_informer.go:262] Caches are synced for deployment
I0725 19:48:00.220397 1 shared_informer.go:262] Caches are synced for stateful set
I0725 19:48:00.224287 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0725 19:48:00.225610 1 shared_informer.go:262] Caches are synced for expand
I0725 19:48:00.227885 1 shared_informer.go:262] Caches are synced for PVC protection
I0725 19:48:00.241304 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0725 19:48:00.241831 1 shared_informer.go:262] Caches are synced for crt configmap
I0725 19:48:00.267428 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0725 19:48:00.267536 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0725 19:48:00.352108 1 shared_informer.go:262] Caches are synced for taint
I0725 19:48:00.352228 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
I0725 19:48:00.352235 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W0725 19:48:00.352273 1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220725124607-24757. Assuming now as a timestamp.
I0725 19:48:00.352298 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0725 19:48:00.352364 1 event.go:294] "Event occurred" object="pause-20220725124607-24757" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220725124607-24757 event: Registered Node pause-20220725124607-24757 in Controller"
I0725 19:48:00.422924 1 shared_informer.go:262] Caches are synced for cronjob
I0725 19:48:00.436655 1 shared_informer.go:262] Caches are synced for resource quota
I0725 19:48:00.442616 1 shared_informer.go:262] Caches are synced for resource quota
I0725 19:48:00.452304 1 shared_informer.go:262] Caches are synced for TTL after finished
I0725 19:48:00.457658 1 shared_informer.go:262] Caches are synced for job
I0725 19:48:00.869039 1 shared_informer.go:262] Caches are synced for garbage collector
I0725 19:48:00.921143 1 shared_informer.go:262] Caches are synced for garbage collector
I0725 19:48:00.921159 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [0dbd38c4ed4c] <==
* I0725 19:47:49.892726 1 node.go:163] Successfully retrieved node IP: 192.168.64.23
I0725 19:47:49.892777 1 server_others.go:138] "Detected node IP" address="192.168.64.23"
I0725 19:47:49.892794 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0725 19:47:49.916321 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0725 19:47:49.916354 1 server_others.go:206] "Using iptables Proxier"
I0725 19:47:49.916374 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0725 19:47:49.916822 1 server.go:661] "Version info" version="v1.24.2"
I0725 19:47:49.916849 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0725 19:47:49.917568 1 config.go:317] "Starting service config controller"
I0725 19:47:49.917598 1 shared_informer.go:255] Waiting for caches to sync for service config
I0725 19:47:49.917617 1 config.go:226] "Starting endpoint slice config controller"
I0725 19:47:49.917620 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0725 19:47:49.919327 1 config.go:444] "Starting node config controller"
I0725 19:47:49.919353 1 shared_informer.go:255] Waiting for caches to sync for node config
I0725 19:47:50.018684 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0725 19:47:50.018771 1 shared_informer.go:262] Caches are synced for service config
I0725 19:47:50.019793 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [148739a1c8bf] <==
* E0725 19:47:28.237731 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220725124607-24757": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:29.292146 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220725124607-24757": dial tcp 192.168.64.23:8443: connect: connection refused
*
* ==> kube-scheduler [8abc60a3d366] <==
* W0725 19:47:30.218974 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.23:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.219370 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.23:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.366868 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.64.23:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.366924 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.23:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.407825 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.408146 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.419525 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.64.23:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.419754 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.23:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.433625 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.64.23:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.433671 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.23:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.609277 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.64.23:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.609319 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.23:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.628983 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.64.23:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.629024 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.23:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.660487 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.64.23:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.660554 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.23:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.728171 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.64.23:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.728211 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.64.23:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.744074 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.744102 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.749994 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.64.23:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.750011 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.23:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
I0725 19:47:31.226628 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0725 19:47:31.226954 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0725 19:47:31.226972 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [a51243e16034] <==
* W0725 19:47:47.630044 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0725 19:47:47.630073 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0725 19:47:47.630219 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0725 19:47:47.630288 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0725 19:47:47.630381 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0725 19:47:47.630409 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0725 19:47:47.630582 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0725 19:47:47.630695 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0725 19:47:47.630837 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0725 19:47:47.630865 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0725 19:47:47.631066 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0725 19:47:47.631095 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0725 19:47:47.631291 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0725 19:47:47.631320 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0725 19:47:47.631514 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0725 19:47:47.631543 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0725 19:47:47.633716 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0725 19:47:47.633745 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0725 19:47:47.633856 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0725 19:47:47.633996 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0725 19:47:47.633930 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0725 19:47:47.634128 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0725 19:47:47.638899 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0725 19:47:47.638930 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0725 19:47:49.315516 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Mon 2022-07-25 19:46:16 UTC, ends at Mon 2022-07-25 19:48:07 UTC. --
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: E0725 19:47:47.424330 4867 kubelet.go:2424] "Error getting node" err="node \"pause-20220725124607-24757\" not found"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: E0725 19:47:47.525057 4867 kubelet.go:2424] "Error getting node" err="node \"pause-20220725124607-24757\" not found"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: E0725 19:47:47.625805 4867 kubelet.go:2424] "Error getting node" err="node \"pause-20220725124607-24757\" not found"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.658326 4867 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220725124607-24757"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.658544 4867 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220725124607-24757"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.660223 4867 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.661058 4867 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.641504 4867 apiserver.go:52] "Watching apiserver"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.643024 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.643091 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.643118 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.736645 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc6970ad-eca0-464d-a5c0-5eecee54875c-kube-proxy\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.736905 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc6970ad-eca0-464d-a5c0-5eecee54875c-xtables-lock\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737135 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swc2d\" (UniqueName: \"kubernetes.io/projected/cc6970ad-eca0-464d-a5c0-5eecee54875c-kube-api-access-swc2d\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737412 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4b7\" (UniqueName: \"kubernetes.io/projected/6b4a2096-027b-40d7-8f3f-f2e78d7f76c7-kube-api-access-4z4b7\") pod \"coredns-6d4b75cb6d-wnp4h\" (UID: \"6b4a2096-027b-40d7-8f3f-f2e78d7f76c7\") " pod="kube-system/coredns-6d4b75cb6d-wnp4h"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737631 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b4a2096-027b-40d7-8f3f-f2e78d7f76c7-config-volume\") pod \"coredns-6d4b75cb6d-wnp4h\" (UID: \"6b4a2096-027b-40d7-8f3f-f2e78d7f76c7\") " pod="kube-system/coredns-6d4b75cb6d-wnp4h"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737898 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc6970ad-eca0-464d-a5c0-5eecee54875c-lib-modules\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.738046 4867 reconciler.go:157] "Reconciler: start to sync state"
Jul 25 19:47:51 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:51.199283 4867 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 25 19:47:51 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:51.754021 4867 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bfdceddb-f0ec-481c-a4a2-ce56bb133d27 path="/var/lib/kubelet/pods/bfdceddb-f0ec-481c-a4a2-ce56bb133d27/volumes"
Jul 25 19:47:59 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:59.043070 4867 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 25 19:48:00 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:00.855095 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:48:00 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:00.935818 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d189436-f57b-4db0-a2c3-534d702f468f-tmp\") pod \"storage-provisioner\" (UID: \"7d189436-f57b-4db0-a2c3-534d702f468f\") " pod="kube-system/storage-provisioner"
Jul 25 19:48:00 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:00.936010 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tbpr\" (UniqueName: \"kubernetes.io/projected/7d189436-f57b-4db0-a2c3-534d702f468f-kube-api-access-5tbpr\") pod \"storage-provisioner\" (UID: \"7d189436-f57b-4db0-a2c3-534d702f468f\") " pod="kube-system/storage-provisioner"
Jul 25 19:48:01 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:01.459786 4867 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="75583aa7957ba5b117c984936a2c407dab53b4eac952fb60df2da647aab86e92"
*
* ==> storage-provisioner [f4d3b9b8fc44] <==
* I0725 19:48:01.570677 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0725 19:48:01.581569 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0725 19:48:01.581924 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0725 19:48:01.593967 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0725 19:48:01.594237 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220725124607-24757_b4ae8073-0557-44c9-82ea-8620c21314c2!
I0725 19:48:01.595287 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"608c730e-6eca-4f99-a3f3-38ad329fea2b", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220725124607-24757_b4ae8073-0557-44c9-82ea-8620c21314c2 became leader
I0725 19:48:01.694800 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220725124607-24757_b4ae8073-0557-44c9-82ea-8620c21314c2!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220725124607-24757 -n pause-20220725124607-24757
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220725124607-24757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context pause-20220725124607-24757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.491863603s)
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220725124607-24757 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220725124607-24757 describe pod : exit status 1 (33.271319ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220725124607-24757 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220725124607-24757 -n pause-20220725124607-24757
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-darwin-amd64 -p pause-20220725124607-24757 logs -n 25
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220725124607-24757 logs -n 25: (3.72986569s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |------------|-----------------------------------------|-----------------------------------------|----------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|------------|-----------------------------------------|-----------------------------------------|----------|---------|---------------------|---------------------|
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | 25 Jul 22 12:39 PDT |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --cancel-scheduled | | | | | |
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --schedule 15s | | | | | |
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --schedule 15s | | | | | |
| stop | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:39 PDT | 25 Jul 22 12:40 PDT |
| | scheduled-stop-20220725123836-24757 | | | | | |
| | --schedule 15s | | | | | |
| delete | -p | scheduled-stop-20220725123836-24757 | jenkins | v1.26.0 | 25 Jul 22 12:40 PDT | 25 Jul 22 12:40 PDT |
| | scheduled-stop-20220725123836-24757 | | | | | |
| start | -p | skaffold-20220725124025-24757 | jenkins | v1.26.0 | 25 Jul 22 12:40 PDT | 25 Jul 22 12:41 PDT |
| | skaffold-20220725124025-24757 | | | | | |
| | --memory=2600 | | | | | |
| | --driver=hyperkit | | | | | |
| docker-env | --shell none -p | skaffold-20220725124025-24757 | skaffold | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:41 PDT |
| | skaffold-20220725124025-24757 | | | | | |
| | --user=skaffold | | | | | |
| delete | -p | skaffold-20220725124025-24757 | jenkins | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:41 PDT |
| | skaffold-20220725124025-24757 | | | | | |
| start | -p | offline-docker-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:43 PDT |
| | offline-docker-20220725124139-24757 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --memory=2048 --wait=true | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p auto-20220725124139-24757 | auto-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:41 PDT | 25 Jul 22 12:42 PDT |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=hyperkit | | | | | |
| ssh | -p auto-20220725124139-24757 | auto-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:42 PDT | 25 Jul 22 12:42 PDT |
| | pgrep -a kubelet | | | | | |
| delete | -p auto-20220725124139-24757 | auto-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:42 PDT | 25 Jul 22 12:42 PDT |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:42 PDT | 25 Jul 22 12:44 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | offline-docker-20220725124139-24757 | jenkins | v1.26.0 | 25 Jul 22 12:43 PDT | 25 Jul 22 12:43 PDT |
| | offline-docker-20220725124139-24757 | | | | | |
| stop | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | 25 Jul 22 12:44 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | 25 Jul 22 12:44 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:44 PDT | 25 Jul 22 12:45 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.24.2 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | stopped-upgrade-20220725124328-24757 | jenkins | v1.26.0 | 25 Jul 22 12:45 PDT | 25 Jul 22 12:46 PDT |
| | stopped-upgrade-20220725124328-24757 | | | | | |
| | --memory=2200 --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | kubernetes-upgrade-20220725124257-24757 | jenkins | v1.26.0 | 25 Jul 22 12:45 PDT | 25 Jul 22 12:45 PDT |
| | kubernetes-upgrade-20220725124257-24757 | | | | | |
| delete | -p | stopped-upgrade-20220725124328-24757 | jenkins | v1.26.0 | 25 Jul 22 12:46 PDT | 25 Jul 22 12:46 PDT |
| | stopped-upgrade-20220725124328-24757 | | | | | |
| start | -p pause-20220725124607-24757 | pause-20220725124607-24757 | jenkins | v1.26.0 | 25 Jul 22 12:46 PDT | 25 Jul 22 12:47 PDT |
| | --memory=2048 | | | | | |
| | --install-addons=false | | | | | |
| | --wait=all --driver=hyperkit | | | | | |
| start | -p pause-20220725124607-24757 | pause-20220725124607-24757 | jenkins | v1.26.0 | 25 Jul 22 12:47 PDT | 25 Jul 22 12:48 PDT |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| start | -p | running-upgrade-20220725124546-24757 | jenkins | v1.26.0 | 25 Jul 22 12:47 PDT | 25 Jul 22 12:48 PDT |
| | running-upgrade-20220725124546-24757 | | | | | |
| | --memory=2200 --alsologtostderr -v=1 | | | | | |
| | --driver=hyperkit | | | | | |
| delete | -p | running-upgrade-20220725124546-24757 | jenkins | v1.26.0 | 25 Jul 22 12:48 PDT | |
| | running-upgrade-20220725124546-24757 | | | | | |
|------------|-----------------------------------------|-----------------------------------------|----------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/07/25 12:47:16
Running on machine: MacOS-Agent-1
Binary: Built with gc go1.18.3 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0725 12:47:16.479890 32469 out.go:296] Setting OutFile to fd 1 ...
I0725 12:47:16.480510 32469 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:47:16.480517 32469 out.go:309] Setting ErrFile to fd 2...
I0725 12:47:16.480525 32469 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0725 12:47:16.480773 32469 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin
I0725 12:47:16.481797 32469 out.go:303] Setting JSON to false
I0725 12:47:16.498142 32469 start.go:115] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":10009,"bootTime":1658768427,"procs":360,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.4","kernelVersion":"21.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
W0725 12:47:16.498280 32469 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0725 12:47:16.537347 32469 out.go:177] * [running-upgrade-20220725124546-24757] minikube v1.26.0 on Darwin 12.4
I0725 12:47:16.573253 32469 notify.go:193] Checking for updates...
I0725 12:47:16.610977 32469 out.go:177] - MINIKUBE_LOCATION=14555
I0725 12:47:16.687221 32469 out.go:177] - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:16.763200 32469 out.go:177] - MINIKUBE_BIN=out/minikube-darwin-amd64
I0725 12:47:16.822085 32469 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0725 12:47:16.865985 32469 out.go:177] - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube
I0725 12:47:16.887763 32469 config.go:178] Loaded profile config "running-upgrade-20220725124546-24757": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0725 12:47:16.887795 32469 start_flags.go:627] config upgrade: Driver=hyperkit
I0725 12:47:16.887807 32469 start_flags.go:639] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
I0725 12:47:16.887931 32469 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/config.json ...
I0725 12:47:16.889381 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:16.889436 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:16.896256 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51439
I0725 12:47:16.896612 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:16.896994 32469 main.go:134] libmachine: Using API Version 1
I0725 12:47:16.897005 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:16.897207 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:16.897330 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:16.919047 32469 out.go:177] * Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
I0725 12:47:16.940078 32469 driver.go:365] Setting default libvirt URI to qemu:///system
I0725 12:47:16.940624 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:16.940682 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:16.948084 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51441
I0725 12:47:16.948480 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:16.948812 32469 main.go:134] libmachine: Using API Version 1
I0725 12:47:16.948823 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:16.949041 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:16.949126 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:16.975911 32469 out.go:177] * Using the hyperkit driver based on existing profile
I0725 12:47:16.997031 32469 start.go:284] selected driver: hyperkit
I0725 12:47:16.997054 32469 start.go:808] validating driver "hyperkit" against &{Name:running-upgrade-20220725124546-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:16.997197 32469 start.go:819] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0725 12:47:16.999295 32469 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:16.999403 32469 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0725 12:47:17.005471 32469 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.26.0
I0725 12:47:17.008449 32469 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:17.008472 32469 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
I0725 12:47:17.008544 32469 cni.go:95] Creating CNI manager for ""
I0725 12:47:17.008554 32469 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:17.008567 32469 start_flags.go:310] config:
{Name:running-upgrade-20220725124546-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:17.008691 32469 iso.go:128] acquiring lock: {Name:mk75e62a3ceeaef3aefa2a3a9c617c6e59d820a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.030213 32469 out.go:177] * Starting control plane node running-upgrade-20220725124546-24757 in cluster running-upgrade-20220725124546-24757
I0725 12:47:17.052085 32469 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
W0725 12:47:17.129419 32469 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
I0725 12:47:17.129570 32469 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/config.json ...
I0725 12:47:17.129705 32469 cache.go:107] acquiring lock: {Name:mkc10c9c66e179cd4a0dc6e8fa7072246b41ed8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129709 32469 cache.go:107] acquiring lock: {Name:mk17fee4f7d14c3244831bbcf83d4048b5bf85ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129750 32469 cache.go:107] acquiring lock: {Name:mk3a8071de70e33fc08172e48377685e9806cd28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129908 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
I0725 12:47:17.129808 32469 cache.go:107] acquiring lock: {Name:mk48edca73ba098a628de4d6b84f553475ca8419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129944 32469 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 260.61µs
I0725 12:47:17.129954 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
I0725 12:47:17.129972 32469 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
I0725 12:47:17.129955 32469 cache.go:107] acquiring lock: {Name:mk9d3d9189d65cdbe444cdf74de19f91817d64ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.129972 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I0725 12:47:17.129993 32469 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 284.206µs
I0725 12:47:17.130026 32469 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 337.2µs
I0725 12:47:17.130058 32469 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
I0725 12:47:17.130045 32469 cache.go:107] acquiring lock: {Name:mk56000c8091f0f3f746944023388a5d091f1f39 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.130069 32469 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I0725 12:47:17.130001 32469 cache.go:107] acquiring lock: {Name:mk0254551fd10ae756e3fd2ab6128ea499634bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.130143 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
I0725 12:47:17.129993 32469 cache.go:107] acquiring lock: {Name:mka4d2d18f2170bd8ec63c8694b1dcb2ae884cf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0725 12:47:17.130161 32469 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 217.443µs
I0725 12:47:17.130194 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
I0725 12:47:17.130122 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
I0725 12:47:17.130217 32469 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 274.811µs
I0725 12:47:17.130212 32469 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
I0725 12:47:17.130234 32469 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
I0725 12:47:17.130249 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
I0725 12:47:17.130248 32469 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 468.611µs
I0725 12:47:17.130257 32469 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
I0725 12:47:17.130279 32469 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
I0725 12:47:17.130282 32469 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 304.845µs
I0725 12:47:17.130284 32469 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 397.407µs
I0725 12:47:17.130294 32469 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
I0725 12:47:17.130303 32469 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0725 12:47:17.130317 32469 cache.go:87] Successfully saved all images to host disk.
I0725 12:47:17.130441 32469 cache.go:208] Successfully downloaded all kic artifacts
I0725 12:47:17.130487 32469 start.go:370] acquiring machines lock for running-upgrade-20220725124546-24757: {Name:mk6dd10c27893192a420c40bba76224953275f58 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0725 12:47:17.130563 32469 start.go:374] acquired machines lock for "running-upgrade-20220725124546-24757" in 59.066µs
I0725 12:47:17.130591 32469 start.go:95] Skipping create...Using existing machine configuration
I0725 12:47:17.130608 32469 fix.go:55] fixHost starting: minikube
I0725 12:47:17.131033 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:17.131062 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:17.137985 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51443
I0725 12:47:17.138349 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:17.138653 32469 main.go:134] libmachine: Using API Version 1
I0725 12:47:17.138663 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:17.138899 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:17.139014 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:17.139093 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:47:17.139180 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:17.139253 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:47:17.139989 32469 fix.go:103] recreateIfNeeded on running-upgrade-20220725124546-24757: state=Running err=<nil>
W0725 12:47:17.140003 32469 fix.go:129] unexpected machine state, will restart: <nil>
I0725 12:47:17.183090 32469 out.go:177] * Updating the running hyperkit "running-upgrade-20220725124546-24757" VM ...
I0725 12:47:17.220993 32469 machine.go:88] provisioning docker machine ...
I0725 12:47:17.221027 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:17.221327 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetMachineName
I0725 12:47:17.221510 32469 buildroot.go:166] provisioning hostname "running-upgrade-20220725124546-24757"
I0725 12:47:17.221536 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetMachineName
I0725 12:47:17.221710 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.221900 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.222112 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.222273 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.222398 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.222577 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.222794 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.222807 32469 main.go:134] libmachine: About to run SSH command:
sudo hostname running-upgrade-20220725124546-24757 && echo "running-upgrade-20220725124546-24757" | sudo tee /etc/hostname
I0725 12:47:17.294480 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220725124546-24757
I0725 12:47:17.294498 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.294635 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.294737 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.294834 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.294962 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.295088 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.295210 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.295222 32469 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\srunning-upgrade-20220725124546-24757' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220725124546-24757/g' /etc/hosts;
else
echo '127.0.1.1 running-upgrade-20220725124546-24757' | sudo tee -a /etc/hosts;
fi
fi
I0725 12:47:17.360741 32469 main.go:134] libmachine: SSH cmd err, output: <nil>:
I0725 12:47:17.360770 32469 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube}
I0725 12:47:17.360784 32469 buildroot.go:174] setting up certificates
I0725 12:47:17.360793 32469 provision.go:83] configureAuth start
I0725 12:47:17.360800 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetMachineName
I0725 12:47:17.360920 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetIP
I0725 12:47:17.361011 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.361090 32469 provision.go:138] copyHostCerts
I0725 12:47:17.361156 32469 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem, removing ...
I0725 12:47:17.361164 32469 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem
I0725 12:47:17.361279 32469 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.pem (1078 bytes)
I0725 12:47:17.361459 32469 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem, removing ...
I0725 12:47:17.361465 32469 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem
I0725 12:47:17.361527 32469 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cert.pem (1123 bytes)
I0725 12:47:17.361649 32469 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem, removing ...
I0725 12:47:17.361655 32469 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem
I0725 12:47:17.361716 32469 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/key.pem (1679 bytes)
I0725 12:47:17.361827 32469 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20220725124546-24757 san=[192.168.64.22 192.168.64.22 localhost 127.0.0.1 minikube running-upgrade-20220725124546-24757]
I0725 12:47:17.441821 32469 provision.go:172] copyRemoteCerts
I0725 12:47:17.441875 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0725 12:47:17.441892 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.442065 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.442221 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.442400 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.442647 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:17.479745 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0725 12:47:17.488907 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
I0725 12:47:17.497728 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0725 12:47:17.506911 32469 provision.go:86] duration metric: configureAuth took 146.110362ms
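Note: the configureAuth step above regenerates the Docker server certificate from the existing CA with the SANs listed at 12:47:17.361827, then copies ca.pem/server.pem/server-key.pem into /etc/docker. A minimal Go sketch of issuing such a SAN certificate from an existing CA (file names, org and key size are illustrative, error handling elided; this is not minikube's exact implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// assumption: ca.pem / ca-key.pem already exist as PEM-encoded RSA (PKCS#1) material
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs corresponding to the san=[...] list logged above
		DNSNames:    []string{"localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("192.168.64.22"), net.ParseIP("127.0.0.1")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}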
I0725 12:47:17.506928 32469 buildroot.go:189] setting minikube options for container-runtime
I0725 12:47:17.507036 32469 config.go:178] Loaded profile config "running-upgrade-20220725124546-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0725 12:47:17.507048 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:17.507168 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.507259 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.507337 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.507412 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.507494 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.507590 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.507685 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.507693 32469 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0725 12:47:17.572899 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
I0725 12:47:17.572915 32469 buildroot.go:70] root file system type: tmpfs
I0725 12:47:17.573048 32469 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0725 12:47:17.573066 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.573194 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.573294 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.573383 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.573484 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.573610 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.573718 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.573767 32469 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0725 12:47:17.644436 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0725 12:47:17.644460 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:17.644585 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:17.644687 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.644778 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:17.644883 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:17.645028 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:17.645136 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:17.645150 32469 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
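Note: the one-liner above is an idempotent update: `diff -u` exits non-zero only when the installed unit differs from the freshly rendered docker.service.new, so the move/daemon-reload/enable/restart branch runs only when something actually changed. A small Go sketch of issuing that same command; runSSH is a hypothetical stand-in for minikube's remote command runner:

package main

// sketch: replace the unit only when it differs, then reload and restart docker.
func updateDockerUnit(runSSH func(cmd string) error) error {
	cmd := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { " +
		"sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; " +
		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
	// diff succeeds (exit 0) when the files are identical, so the restart branch is skipped on no-op runs
	return runSSH(cmd)
}

func main() {
	_ = updateDockerUnit
}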
I0725 12:47:26.735641 32449 ssh_runner.go:235] Completed: sudo systemctl restart docker: (20.847699661s)
I0725 12:47:26.735697 32449 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0725 12:47:26.858226 32449 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0725 12:47:26.963261 32449 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0725 12:47:26.976260 32449 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0725 12:47:26.976340 32449 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0725 12:47:26.986352 32449 start.go:471] Will wait 60s for crictl version
I0725 12:47:26.986413 32449 ssh_runner.go:195] Run: sudo crictl version
I0725 12:47:27.022115 32449 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
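Note: "Will wait 60s for socket path /var/run/cri-dockerd.sock" above is a stat-until-present poll before crictl is queried. A minimal sketch of that kind of wait, assuming a generic remote command runner rather than minikube's actual ssh_runner type:

package main

import (
	"errors"
	"fmt"
	"time"
)

// sketch: poll until the CRI socket path exists on the guest or the deadline passes.
func waitForSocket(run func(cmd string) error, path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := run("stat " + path); err == nil {
			return nil // the socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	err := waitForSocket(func(string) error { return errors.New("not yet") }, "/var/run/cri-dockerd.sock", time.Second)
	fmt.Println(err)
}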
I0725 12:47:27.022179 32449 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:27.064171 32449 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:27.168890 32449 out.go:204] * Preparing Kubernetes v1.24.2 on Docker 20.10.17 ...
I0725 12:47:27.168984 32449 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0725 12:47:27.171936 32449 preload.go:132] Checking if preload exists for k8s version v1.24.2 and runtime docker
I0725 12:47:27.171995 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:27.196306 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:27.196318 32449 docker.go:542] Images already preloaded, skipping extraction
I0725 12:47:27.196381 32449 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:27.220719 32449 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.2
k8s.gcr.io/kube-scheduler:v1.24.2
k8s.gcr.io/kube-controller-manager:v1.24.2
k8s.gcr.io/kube-proxy:v1.24.2
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0725 12:47:27.220737 32449 cache_images.go:84] Images are preloaded, skipping loading
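Note: both `docker images` listings above already contain every image kubeadm needs for v1.24.2, so extraction of the preload tarball is skipped. A sketch of that comparison, with the expected list hard-coded purely for illustration:

package main

import (
	"fmt"
	"strings"
)

// sketch: decide whether the preloaded images already cover the required set.
func needsPreloadExtraction(dockerImagesOutput string, wanted []string) bool {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	for _, img := range wanted {
		if !have[img] {
			return true // at least one required image is missing
		}
	}
	return false
}

func main() {
	wanted := []string{"k8s.gcr.io/kube-apiserver:v1.24.2", "k8s.gcr.io/etcd:3.5.3-0"}
	got := "k8s.gcr.io/kube-apiserver:v1.24.2\nk8s.gcr.io/etcd:3.5.3-0"
	fmt.Println(needsPreloadExtraction(got, wanted)) // false: everything preloaded
}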
I0725 12:47:27.220897 32449 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0725 12:47:27.264481 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:27.264492 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:27.264506 32449 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0725 12:47:27.264519 32449 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.23 APIServerPort:8443 KubernetesVersion:v1.24.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220725124607-24757 NodeName:pause-20220725124607-24757 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.23 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0725 12:47:27.264608 32449 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.23
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-20220725124607-24757"
kubeletExtraArgs:
node-ip: 192.168.64.23
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.23"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.24.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0725 12:47:27.264672 32449 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220725124607-24757 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.23 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
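Note: the kubeadm options struct logged at 12:47:27.264519 is rendered into the InitConfiguration/ClusterConfiguration/KubeletConfiguration YAML shown above before being copied to /var/tmp/minikube/kubeadm.yaml.new. A rough text/template sketch of that rendering step (the struct and template here are simplified stand-ins, not minikube's real types):

package main

import (
	"os"
	"text/template"
)

// simplified stand-in for the kubeadm options logged above
type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	CRISocket         string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress: "192.168.64.23", APIServerPort: 8443,
		NodeName: "pause-20220725124607-24757", CRISocket: "/var/run/cri-dockerd.sock",
		PodSubnet: "10.244.0.0/16", ServiceCIDR: "10.96.0.0/12", KubernetesVersion: "v1.24.2",
	})
}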
I0725 12:47:27.264720 32449 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.2
I0725 12:47:27.270741 32449 binaries.go:44] Found k8s binaries, skipping transfer
I0725 12:47:27.270790 32449 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0725 12:47:27.276381 32449 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (489 bytes)
I0725 12:47:27.286602 32449 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0725 12:47:27.298589 32449 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
I0725 12:47:27.320362 32449 ssh_runner.go:195] Run: grep 192.168.64.23 control-plane.minikube.internal$ /etc/hosts
I0725 12:47:27.327321 32449 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757 for IP: 192.168.64.23
I0725 12:47:27.327422 32449 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
I0725 12:47:27.327476 32449 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
I0725 12:47:27.327554 32449 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key
I0725 12:47:27.327623 32449 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.key.7d9037ca
I0725 12:47:27.327670 32449 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.key
I0725 12:47:27.327873 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem (1338 bytes)
W0725 12:47:27.327912 32449 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757_empty.pem, impossibly tiny 0 bytes
I0725 12:47:27.327925 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
I0725 12:47:27.327955 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1078 bytes)
I0725 12:47:27.327988 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
I0725 12:47:27.328016 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1679 bytes)
I0725 12:47:27.328090 32449 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:27.328573 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0725 12:47:27.360725 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0725 12:47:27.387942 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0725 12:47:27.427683 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0725 12:47:27.447934 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0725 12:47:27.464461 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0725 12:47:27.480792 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0725 12:47:27.496689 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0725 12:47:27.512885 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0725 12:47:27.528701 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem --> /usr/share/ca-certificates/24757.pem (1338 bytes)
I0725 12:47:27.547370 32449 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /usr/share/ca-certificates/247572.pem (1708 bytes)
I0725 12:47:27.588968 32449 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0725 12:47:27.600771 32449 ssh_runner.go:195] Run: openssl version
I0725 12:47:27.604305 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0725 12:47:27.612006 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.616154 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:54 /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.616190 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:27.623374 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0725 12:47:27.637254 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24757.pem && ln -fs /usr/share/ca-certificates/24757.pem /etc/ssl/certs/24757.pem"
I0725 12:47:27.647841 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24757.pem
I0725 12:47:27.651408 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:57 /usr/share/ca-certificates/24757.pem
I0725 12:47:27.651458 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24757.pem
I0725 12:47:27.655546 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24757.pem /etc/ssl/certs/51391683.0"
I0725 12:47:27.662604 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247572.pem && ln -fs /usr/share/ca-certificates/247572.pem /etc/ssl/certs/247572.pem"
I0725 12:47:27.670270 32449 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247572.pem
I0725 12:47:27.673388 32449 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:57 /usr/share/ca-certificates/247572.pem
I0725 12:47:27.673431 32449 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247572.pem
I0725 12:47:27.682884 32449 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/247572.pem /etc/ssl/certs/3ec20f2e.0"
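Note: the b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are the OpenSSL subject hashes of the respective certificates, which is why each `ln -fs` is preceded by `openssl x509 -hash -noout`; the hash-named symlink in /etc/ssl/certs is what lets the system trust store find the CA. A sketch of the same flow, shelling out to openssl exactly as the log does (paths are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sketch: compute the OpenSSL subject hash of a PEM cert and link it
// into /etc/ssl/certs/<hash>.0, mirroring the commands in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", cmd).Run()
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
}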
I0725 12:47:27.697969 32449 kubeadm.go:395] StartCluster: {Name:pause-20220725124607-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.2 ClusterName:pause-20220725124607-24757 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:27.698088 32449 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:27.751529 32449 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0725 12:47:27.761108 32449 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0725 12:47:27.761129 32449 kubeadm.go:626] restartCluster start
I0725 12:47:27.761186 32449 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0725 12:47:27.797629 32449 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0725 12:47:27.798037 32449 kubeconfig.go:92] found "pause-20220725124607-24757" server: "https://192.168.64.23:8443"
I0725 12:47:27.798425 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:27.799059 32449 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0725 12:47:27.805671 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:27.805715 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:27.822205 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.022382 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.022446 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.034958 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.222415 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.222472 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.237243 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.423027 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.423150 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.432259 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.622863 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.622923 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.631111 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:28.822595 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:28.822726 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:28.831432 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.277408 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
+++ /lib/systemd/system/docker.service.new
@@ -3,9 +3,12 @@
Documentation=https://docs.docker.com
After=network.target minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
+Restart=on-failure
@@ -21,7 +24,7 @@
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
-ExecReload=/bin/kill -s HUP
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
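Note on the diff above: the previously installed unit had `ExecReload=/bin/kill -s HUP` with no PID argument, and the new one restores `$MAINPID`. That is also why the earlier heredoc writes `\$MAINPID`: the dollar sign has to be escaped so the remote shell passes it through to the unit file verbatim and systemd, not the shell, resolves it. A tiny sketch of that escaping step (the command string is illustrative):

package main

import (
	"fmt"
	"strings"
)

func main() {
	unit := "ExecReload=/bin/kill -s HUP $MAINPID\n"
	// escape $ so the double-quoted printf on the guest does not expand it
	escaped := strings.ReplaceAll(unit, "$", `\$`)
	cmd := fmt.Sprintf(`sudo mkdir -p /lib/systemd/system && printf %%s "%s" | sudo tee /lib/systemd/system/docker.service.new`, escaped)
	fmt.Println(cmd)
}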
I0725 12:47:29.277420 32469 machine.go:91] provisioned docker machine in 12.056643231s
I0725 12:47:29.277434 32469 start.go:307] post-start starting for "running-upgrade-20220725124546-24757" (driver="hyperkit")
I0725 12:47:29.277440 32469 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0725 12:47:29.277451 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.277626 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0725 12:47:29.277638 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.277741 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.277814 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.277929 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.278009 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:29.314489 32469 ssh_runner.go:195] Run: cat /etc/os-release
I0725 12:47:29.317079 32469 info.go:137] Remote host: Buildroot 2019.02.7
I0725 12:47:29.317093 32469 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/addons for local assets ...
I0725 12:47:29.317198 32469 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files for local assets ...
I0725 12:47:29.317334 32469 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem -> 247572.pem in /etc/ssl/certs
I0725 12:47:29.317487 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0725 12:47:29.321269 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:29.330125 32469 start.go:310] post-start completed in 52.683944ms
I0725 12:47:29.330138 32469 fix.go:57] fixHost completed within 12.199776264s
I0725 12:47:29.330151 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.330290 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.330404 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.330506 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.330607 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.330724 32469 main.go:134] libmachine: Using SSH client type: native
I0725 12:47:29.330829 32469 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil> [] 0s} 192.168.64.22 22 <nil> <nil>}
I0725 12:47:29.330836 32469 main.go:134] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0725 12:47:29.398680 32469 main.go:134] libmachine: SSH cmd err, output: <nil>: 1658778449.704738852
I0725 12:47:29.398690 32469 fix.go:207] guest clock: 1658778449.704738852
I0725 12:47:29.398695 32469 fix.go:220] Guest: 2022-07-25 12:47:29.704738852 -0700 PDT Remote: 2022-07-25 12:47:29.33014 -0700 PDT m=+12.897586139 (delta=374.598852ms)
I0725 12:47:29.398714 32469 fix.go:191] guest clock delta is within tolerance: 374.598852ms
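Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resynchronize when the delta exceeds a tolerance; here 374.598852ms is within bounds. A small sketch of that comparison (the tolerance handling is implied, not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// sketch: parse the guest's `date +%s.%N` output and compute the absolute clock delta.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	d, _ := clockDelta("1658778449.704738852", time.Unix(1658778449, 330140000))
	fmt.Println("delta:", d) // compared against an allowed tolerance before deciding to sync
}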
I0725 12:47:29.398718 32469 start.go:82] releasing machines lock for "running-upgrade-20220725124546-24757", held for 12.268384436s
I0725 12:47:29.398736 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.398865 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetIP
I0725 12:47:29.398966 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399075 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399189 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399504 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399599 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:47:29.399661 32469 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0725 12:47:29.399688 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.399747 32469 ssh_runner.go:195] Run: systemctl --version
I0725 12:47:29.399760 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:47:29.399768 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.399847 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.399876 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:47:29.399953 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.399954 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:47:29.400030 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:29.400052 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:47:29.400127 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:47:29.433425 32469 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
I0725 12:47:29.433488 32469 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0725 12:47:29.596262 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0725 12:47:29.604094 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0725 12:47:29.610719 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0725 12:47:29.618917 32469 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0725 12:47:29.681154 32469 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0725 12:47:29.743519 32469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0725 12:47:29.811526 32469 ssh_runner.go:195] Run: sudo systemctl restart docker
I0725 12:47:31.054001 32469 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.242480525s)
I0725 12:47:31.054059 32469 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:31.084519 32469 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0725 12:47:31.154286 32469 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
I0725 12:47:31.154426 32469 ssh_runner.go:195] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts
I0725 12:47:31.158337 32469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.64.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0725 12:47:31.164342 32469 localpath.go:92] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/client.crt -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt
I0725 12:47:31.164602 32469 localpath.go:117] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/client.key -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.key
I0725 12:47:31.164878 32469 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
I0725 12:47:31.164922 32469 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0725 12:47:31.187082 32469 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
kubernetesui/dashboard:v2.0.0-beta8
k8s.gcr.io/coredns:1.6.5
k8s.gcr.io/etcd:3.4.3-0
kubernetesui/metrics-scraper:v1.0.2
k8s.gcr.io/kube-addon-manager:v9.0.2
k8s.gcr.io/pause:3.1
gcr.io/k8s-minikube/storage-provisioner:v1.8.1
-- /stdout --
I0725 12:47:31.187094 32469 docker.go:617] gcr.io/k8s-minikube/storage-provisioner:v5 wasn't preloaded
I0725 12:47:31.187102 32469 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
I0725 12:47:31.193719 32469 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.194122 32469 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
I0725 12:47:31.194532 32469 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
I0725 12:47:31.194866 32469 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
I0725 12:47:31.195101 32469 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
I0725 12:47:31.195598 32469 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
I0725 12:47:31.195961 32469 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
I0725 12:47:31.196248 32469 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
I0725 12:47:31.200862 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
I0725 12:47:31.202444 32469 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
I0725 12:47:31.202458 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
I0725 12:47:31.202580 32469 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.203086 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
I0725 12:47:31.203751 32469 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
I0725 12:47:31.203755 32469 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
I0725 12:47:31.203879 32469 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
I0725 12:47:29.022465 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.022564 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.032625 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.222418 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.222483 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.231030 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.422278 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.422342 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.431624 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.622300 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.622383 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.631597 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:29.823351 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:29.823415 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:29.832384 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.023364 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.023457 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.033811 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.222362 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.222493 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.232730 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.423149 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.423347 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.434769 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.623840 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.623975 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.634089 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.823171 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.823233 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.832208 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.832219 32449 api_server.go:165] Checking apiserver status ...
I0725 12:47:30.832277 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0725 12:47:30.841016 32449 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0725 12:47:30.841029 32449 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
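Note: the repeated "Checking apiserver status" / pgrep failures above are a poll that eventually gives up, and the timeout is what drives the "needs reconfigure" decision and the cluster restart that follows. A hedged sketch of that retry shape (the runner and interval are illustrative stand-ins):

package main

import (
	"errors"
	"fmt"
	"time"
)

// sketch: keep asking the guest for the kube-apiserver pid until it appears
// or the deadline expires, as the repeated pgrep attempts above do.
func waitForAPIServerPID(run func(cmd string) (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if pid, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return pid, nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", errors.New("timed out waiting for the condition")
}

func main() {
	_, err := waitForAPIServerPID(func(string) (string, error) { return "", errors.New("no process") }, time.Second)
	fmt.Println(err) // a timeout here leads to the "needs reconfigure" path
}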
I0725 12:47:30.841038 32449 kubeadm.go:1092] stopping kube-system containers ...
I0725 12:47:30.841091 32449 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:30.863318 32449 docker.go:443] Stopping containers: [8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd]
I0725 12:47:30.863399 32449 ssh_runner.go:195] Run: docker stop 8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd
I0725 12:47:31.741999 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
I0725 12:47:31.742484 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
I0725 12:47:31.757587 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
I0725 12:47:31.793489 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
I0725 12:47:31.847759 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
I0725 12:47:31.890670 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
I0725 12:47:31.892570 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.917394 32469 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
I0725 12:47:31.920149 32469 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
I0725 12:47:31.920179 32469 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.920217 32469 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:31.944492 32469 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
I0725 12:47:31.944604 32469 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
I0725 12:47:31.947255 32469 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
I0725 12:47:31.947274 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
I0725 12:47:31.984214 32469 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
I0725 12:47:31.984231 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
I0725 12:47:32.434491 32469 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
I0725 12:47:32.434522 32469 cache_images.go:123] Successfully loaded all cached images
I0725 12:47:32.434526 32469 cache_images.go:92] LoadImages completed in 1.247441451s
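Note: storage-provisioner:v5 was not present at the expected hash, so it is removed, copied from the local image cache and streamed into `docker load` over SSH, which is the 1.247441451s LoadImages step above. A small sketch of that transfer-and-load step; copyToGuest and runSSH are hypothetical helpers standing in for minikube's runners:

package main

import "fmt"

// sketch: after scp'ing the cached tarball to the guest, stream it into docker.
func loadCachedImage(copyToGuest func(src, dst string) error, runSSH func(cmd string) error, src string) error {
	dst := "/var/lib/minikube/images/storage-provisioner_v5"
	if err := copyToGuest(src, dst); err != nil {
		return err
	}
	return runSSH(fmt.Sprintf("/bin/bash -c \"sudo cat %s | docker load\"", dst))
}

func main() {
	_ = loadCachedImage
}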
I0725 12:47:32.434591 32469 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0725 12:47:32.463243 32469 cni.go:95] Creating CNI manager for ""
I0725 12:47:32.463254 32469 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:32.463267 32469 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0725 12:47:32.463281 32469 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.64.22 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220725124546-24757 NodeName:running-upgrade-20220725124546-24757 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.64.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.64.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0725 12:47:32.463377 32469 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.22
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "running-upgrade-20220725124546-24757"
kubeletExtraArgs:
node-ip: 192.168.64.22
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.64.22"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.17.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0725 12:47:32.463433 32469 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=running-upgrade-20220725124546-24757 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.64.22
[Install]
config:
{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0725 12:47:32.463470 32469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
I0725 12:47:32.467639 32469 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubectl
Initiating transfer...
I0725 12:47:32.467680 32469 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
I0725 12:47:32.472165 32469 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256
I0725 12:47:32.472175 32469 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256
I0725 12:47:32.472169 32469 binary.go:76] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256
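The three "Not caching binary" lines above pull kubelet, kubectl and kubeadm straight from storage.googleapis.com, each paired (via the checksum= query) with the published .sha256 file. A minimal Go sketch of that download-and-verify pattern, assuming the same release URLs; writing into the current directory and the single-binary main are illustration only:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into memory (fine for a sketch; the real binaries are tens of MB).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	url := "https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl"

	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}

	// The published .sha256 file carries the hex digest as its first field.
	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch for kubectl: got %s, want %s", got, want))
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified")
}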
I0725 12:47:32.472210 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:47:32.472263 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
I0725 12:47:32.472269 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
I0725 12:47:32.475825 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
I0725 12:47:32.475922 32469 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
I0725 12:47:32.475936 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
I0725 12:47:32.492511 32469 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
I0725 12:47:32.640658 32469 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
I0725 12:47:32.765279 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
I0725 12:47:33.643113 32469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0725 12:47:33.647421 32469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
I0725 12:47:33.654434 32469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0725 12:47:33.661705 32469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2082 bytes)
I0725 12:47:33.668711 32469 ssh_runner.go:195] Run: grep 192.168.64.22 control-plane.minikube.internal$ /etc/hosts
I0725 12:47:33.671596 32469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.64.22 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0725 12:47:33.677578 32469 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles for IP: 192.168.64.22
I0725 12:47:33.677759 32469 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key
I0725 12:47:33.677842 32469 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key
I0725 12:47:33.677936 32469 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/client.key
I0725 12:47:33.677962 32469 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd
I0725 12:47:33.677981 32469 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd with IP's: [192.168.64.22 10.96.0.1 127.0.0.1 10.0.0.1]
I0725 12:47:33.842821 32469 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd ...
I0725 12:47:33.842838 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd: {Name:mk89e1dc262be7bd639c97350ec09a1a385b9a32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:33.843138 32469 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd ...
I0725 12:47:33.843146 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd: {Name:mk04002e8104c502ea4395fb47fabe2ccb2a61c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:33.843337 32469 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt.4bcc73dd -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt
I0725 12:47:33.843524 32469 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key.4bcc73dd -> /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key
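The apiserver certificate generated above is issued for the SAN IPs [192.168.64.22 10.96.0.1 127.0.0.1 10.0.0.1] (node IP, service ClusterIP, loopback, 10.0.0.1). A short crypto/x509 sketch of issuing such a cert; it is self-signed here to stay compact, whereas the log signs it with the minikube CA key, and the output filenames are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN IPs from the log entry above.
		IPAddresses: []net.IP{
			net.ParseIP("192.168.64.22"),
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
		},
	}

	// Self-signed for brevity; the real flow signs with the cluster CA cert/key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("apiserver.crt", certPEM, 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("apiserver.key", keyPEM, 0o600); err != nil {
		panic(err)
	}
}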
I0725 12:47:33.843740 32469 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/proxy-client.key
I0725 12:47:33.843922 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem (1338 bytes)
W0725 12:47:33.843961 32469 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757_empty.pem, impossibly tiny 0 bytes
I0725 12:47:33.843971 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca-key.pem (1679 bytes)
I0725 12:47:33.844003 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/ca.pem (1078 bytes)
I0725 12:47:33.844032 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/cert.pem (1123 bytes)
I0725 12:47:33.844059 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/key.pem (1679 bytes)
I0725 12:47:33.844126 32469 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem (1708 bytes)
I0725 12:47:33.844650 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0725 12:47:33.854538 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0725 12:47:33.863826 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0725 12:47:33.873420 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0725 12:47:33.882368 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0725 12:47:33.891411 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0725 12:47:33.901467 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0725 12:47:33.910211 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0725 12:47:33.919862 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0725 12:47:33.929345 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/certs/24757.pem --> /usr/share/ca-certificates/24757.pem (1338 bytes)
I0725 12:47:33.938668 32469 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/files/etc/ssl/certs/247572.pem --> /usr/share/ca-certificates/247572.pem (1708 bytes)
I0725 12:47:33.947792 32469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
I0725 12:47:33.954304 32469 ssh_runner.go:195] Run: openssl version
I0725 12:47:33.957730 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0725 12:47:33.962477 32469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:33.965403 32469 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:54 /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:33.965444 32469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0725 12:47:33.973126 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0725 12:47:33.977142 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/24757.pem && ln -fs /usr/share/ca-certificates/24757.pem /etc/ssl/certs/24757.pem"
I0725 12:47:33.982074 32469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/24757.pem
I0725 12:47:33.984977 32469 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:57 /usr/share/ca-certificates/24757.pem
I0725 12:47:33.985020 32469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/24757.pem
I0725 12:47:33.992835 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/24757.pem /etc/ssl/certs/51391683.0"
I0725 12:47:33.997177 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/247572.pem && ln -fs /usr/share/ca-certificates/247572.pem /etc/ssl/certs/247572.pem"
I0725 12:47:34.002260 32469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/247572.pem
I0725 12:47:34.005351 32469 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:57 /usr/share/ca-certificates/247572.pem
I0725 12:47:34.005391 32469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/247572.pem
I0725 12:47:34.013100 32469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/247572.pem /etc/ssl/certs/3ec20f2e.0"
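The openssl/ln sequence above installs each CA into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so TLS clients on the node trust it. A rough Go sketch of the same two steps against local paths, shelling out to openssl exactly as the log does; the installCA helper name is invented for the sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject-hash name.
func installCA(certPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join(certsDir, hash+".0")
	// Refresh the symlink if it already exists.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}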
I0725 12:47:34.017828 32469 kubeadm.go:395] StartCluster: {Name:running-upgrade-20220725124546-24757 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/14534/minikube-v1.26.0-1657340101-14534-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperkit Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 Kubernet
esConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
I0725 12:47:34.017910 32469 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:34.038771 32469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0725 12:47:34.043395 32469 kubeadm.go:410] found existing configuration files, will attempt cluster restart
I0725 12:47:34.043428 32469 kubeadm.go:626] restartCluster start
I0725 12:47:34.043470 32469 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0725 12:47:34.047753 32469 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.048162 32469 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220725124546-24757" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:34.048333 32469 kubeconfig.go:127] "running-upgrade-20220725124546-24757" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig - will repair!
I0725 12:47:34.048691 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:34.049575 32469 kapi.go:59] client config for running-upgrade-20220725124546-24757: &rest.Config{Host:"https://192.168.64.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/runn
ing-upgrade-20220725124546-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
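The client config dumped above amounts to a rest.Config pointed at the apiserver with the profile's client cert/key and the cluster CA. A minimal client-go sketch building the equivalent config by hand; the Host is taken from the log, while the three file paths are placeholders for the profile's client.crt/client.key and the .minikube ca.crt:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Placeholder paths; substitute the profile's client cert/key and the minikube CA.
	cfg := &rest.Config{
		Host: "https://192.168.64.22:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/path/to/profile/client.crt",
			KeyFile:  "/path/to/profile/client.key",
			CAFile:   "/path/to/.minikube/ca.crt",
		},
	}

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}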
I0725 12:47:34.050045 32469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0725 12:47:34.054115 32469 kubeadm.go:593] needs reconfigure: configs differ:
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -1,4 +1,4 @@
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.64.22
@@ -12,32 +12,63 @@
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
- name: minikube
+ name: "running-upgrade-20220725124546-24757"
+ kubeletExtraArgs:
+ node-ip: 192.168.64.22
taints: []
---
-apiVersion: kubeadm.k8s.io/v1beta1
+apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
+ certSANs: ["127.0.0.1", "localhost", "192.168.64.22"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+controllerManager:
+ extraArgs:
+ allocate-node-cidrs: "true"
+ leader-elect: "false"
+scheduler:
+ extraArgs:
+ leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
-clusterName: kubernetes
-controlPlaneEndpoint: localhost:8443
+clusterName: mk
+controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
+ extraArgs:
+ proxy-refresh-interval: "70000"
kubernetesVersion: v1.17.0
networking:
dnsDomain: cluster.local
- podSubnet: ""
+ podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
+authentication:
+ x509:
+ clientCAFile: /var/lib/minikube/certs/ca.crt
+cgroupDriver: cgroupfs
+clusterDomain: "cluster.local"
+# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
+failSwapOn: false
+staticPodPath: /etc/kubernetes/manifests
+---
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+clusterCIDR: "10.244.0.0/16"
+metricsBindAddress: 0.0.0.0:10249
+conntrack:
+ maxPerCore: 0
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
+ tcpEstablishedTimeout: 0s
+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
+ tcpCloseWaitTimeout: 0s
-- /stdout --
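The "needs reconfigure" decision above comes from diffing the kubeadm.yaml already on the node against the freshly rendered kubeadm.yaml.new; any difference (diff exiting 1) triggers the container stop and kubeadm phase replay that follow. A small sketch of that check over local paths, assuming GNU diff's exit-code convention:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configsDiffer runs `diff -u old new` and reports whether the files differ.
// diff exits 0 when identical, 1 when different, and >1 on error.
func configsDiffer(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // files differ; out holds the unified diff
	}
	return false, "", err // diff itself failed
}

func main() {
	differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if differ {
		fmt.Println("needs reconfigure: configs differ:\n" + diff)
	}
}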
I0725 12:47:34.054129 32469 kubeadm.go:1092] stopping kube-system containers ...
I0725 12:47:34.054189 32469 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0725 12:47:34.076347 32469 docker.go:443] Stopping containers: [4367743c93ba 02c56a28708a 557def1c077d a38b11b910ca faeff5a354ad fda86db631da 8f3fe5d92c6b 9760fca15e21 d3200f5d0b91 7f0c019b74b5]
I0725 12:47:34.076413 32469 ssh_runner.go:195] Run: docker stop 4367743c93ba 02c56a28708a 557def1c077d a38b11b910ca faeff5a354ad fda86db631da 8f3fe5d92c6b 9760fca15e21 d3200f5d0b91 7f0c019b74b5
I0725 12:47:34.098775 32469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0725 12:47:34.105809 32469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0725 12:47:34.110083 32469 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5625 Jul 25 19:46 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jul 25 19:46 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1981 Jul 25 19:47 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5605 Jul 25 19:46 /etc/kubernetes/scheduler.conf
I0725 12:47:34.110192 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0725 12:47:34.114184 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.114262 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0725 12:47:34.118455 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0725 12:47:34.122212 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.122248 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0725 12:47:34.126143 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0725 12:47:34.129958 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.129994 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0725 12:47:34.134006 32469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0725 12:47:34.137832 32469 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:34.137876 32469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0725 12:47:34.141750 32469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0725 12:47:34.146202 32469 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0725 12:47:34.146212 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:34.188027 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.124387 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.272929 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:35.353057 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
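The five kubeadm invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) replay individual init phases against the new kubeadm.yaml instead of running a full `kubeadm init`. A sketch of driving the same sequence locally, assuming kubeadm is on PATH rather than under /var/lib/minikube/binaries:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, args := range phases {
		fmt.Println("running: kubeadm " + strings.Join(args, " "))
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase failed: %v\n", err)
			os.Exit(1)
		}
	}
}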
I0725 12:47:35.418959 32469 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:47:35.419055 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:35.930921 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:36.429686 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:36.929679 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:37.431210 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:37.929517 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:38.430454 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:38.929930 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:39.429482 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:39.929596 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:40.431467 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:40.929623 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:41.429598 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:40.471014 32449 ssh_runner.go:235] Completed: docker stop 8abc60a3d366 148739a1c8bf fa5cdb6bc0bc 8c03efc958d7 6c4c14ed6c7b ac68acceae4b 82a2874088cf 7c07ade5b55e 4bbd9292ccc1 aa9e0a649a58 fdfac1f68e49 8dc345a99c84 aafbd0b5739c d384999d8139 e7fcb68ce522 1d34b4b583f3 ca566d073d10 fe2463f8ebca 158fd90c2011 7f322c094fe0 790ec96bc26e d77f856d3f70 4557e254cdb1 0d48674bc4e3 759e7d05bfbd: (9.607781038s)
I0725 12:47:40.471069 32449 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0725 12:47:40.497793 32449 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0725 12:47:40.504564 32449 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Jul 25 19:46 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Jul 25 19:46 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 2043 Jul 25 19:46 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Jul 25 19:46 /etc/kubernetes/scheduler.conf
I0725 12:47:40.504613 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0725 12:47:40.510944 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0725 12:47:40.517741 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0725 12:47:40.523731 32449 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:40.523765 32449 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0725 12:47:40.529929 32449 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0725 12:47:40.535794 32449 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0725 12:47:40.535826 32449 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0725 12:47:40.542016 32449 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0725 12:47:40.548372 32449 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0725 12:47:40.548382 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:40.585742 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.045264 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.234558 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.280909 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:41.330190 32449 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:47:41.330251 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:41.839951 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.339855 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.838994 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.849968 32449 api_server.go:71] duration metric: took 1.519809905s to wait for apiserver process to appear ...
I0725 12:47:42.849984 32449 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:47:42.849997 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:41.929449 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.429466 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:42.930789 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:43.431399 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:43.929565 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:44.431573 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:44.931575 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:45.429633 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:45.929405 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:46.429677 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:47.258116 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0725 12:47:47.258131 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0725 12:47:47.760362 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:47.766179 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0725 12:47:47.766196 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0725 12:47:48.258229 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:48.262225 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0725 12:47:48.262237 32449 api_server.go:102] status: https://192.168.64.23:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0725 12:47:48.760315 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:47:48.765926 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 200:
ok
I0725 12:47:48.771384 32449 api_server.go:140] control plane version: v1.24.2
I0725 12:47:48.771425 32449 api_server.go:130] duration metric: took 5.921533487s to wait for apiserver health ...
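The health wait above polls https://192.168.64.23:8443/healthz roughly every 500ms, treating 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still running) as "not ready yet" and stopping on the first 200. A minimal sketch of that loop; it skips TLS verification for brevity, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only for the sketch; load the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403/500 while RBAC bootstrap and post-start hooks finish: keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.64.23:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}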
I0725 12:47:48.771438 32449 cni.go:95] Creating CNI manager for ""
I0725 12:47:48.771458 32449 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:47:48.771481 32449 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:47:48.776990 32449 system_pods.go:59] 7 kube-system pods found
I0725 12:47:48.777004 32449 system_pods.go:61] "coredns-6d4b75cb6d-rglh7" [bfdceddb-f0ec-481c-a4a2-ce56bb133d27] Running
I0725 12:47:48.777010 32449 system_pods.go:61] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0725 12:47:48.777018 32449 system_pods.go:61] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0725 12:47:48.777023 32449 system_pods.go:61] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:47:48.777029 32449 system_pods.go:61] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0725 12:47:48.777034 32449 system_pods.go:61] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:47:48.777038 32449 system_pods.go:61] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:47:48.777042 32449 system_pods.go:74] duration metric: took 5.556495ms to wait for pod list to return data ...
I0725 12:47:48.777048 32449 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:47:48.779296 32449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0725 12:47:48.779312 32449 node_conditions.go:123] node cpu capacity is 2
I0725 12:47:48.779321 32449 node_conditions.go:105] duration metric: took 2.26989ms to run NodePressure ...
I0725 12:47:48.779331 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:47:48.896760 32449 kubeadm.go:762] waiting for restarted kubelet to initialise ...
I0725 12:47:48.899954 32449 kubeadm.go:777] kubelet initialised
I0725 12:47:48.899964 32449 kubeadm.go:778] duration metric: took 3.186627ms waiting for restarted kubelet to initialise ...
I0725 12:47:48.899971 32449 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:48.903437 32449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace to be "Ready" ...
I0725 12:47:48.907836 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:48.907844 32449 pod_ready.go:81] duration metric: took 4.397671ms waiting for pod "coredns-6d4b75cb6d-rglh7" in "kube-system" namespace to be "Ready" ...
I0725 12:47:48.907849 32449 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:47:46.929504 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:47.431501 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:47.931464 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:48.430664 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:48.929375 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:49.429346 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:49.929655 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:50.430437 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:50.929624 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:51.429372 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:50.917931 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:53.417934 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:51.930509 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:52.430439 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:52.930290 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:53.430033 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:53.931374 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:54.430420 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:54.931336 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:55.429562 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:55.929483 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:56.429155 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:55.419093 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:57.915085 32449 pod_ready.go:102] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"False"
I0725 12:47:58.916472 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:58.916486 32449 pod_ready.go:81] duration metric: took 10.008815507s waiting for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:47:58.916492 32449 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.431517 32449 pod_ready.go:92] pod "etcd-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.431549 32449 pod_ready.go:81] duration metric: took 515.03489ms waiting for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.431556 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.435193 32449 pod_ready.go:92] pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.435201 32449 pod_ready.go:81] duration metric: took 3.640991ms waiting for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.435208 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.438379 32449 pod_ready.go:92] pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.438387 32449 pod_ready.go:81] duration metric: took 3.174279ms waiting for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.438394 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.442279 32449 pod_ready.go:92] pod "kube-proxy-vvgjh" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.442289 32449 pod_ready.go:81] duration metric: took 3.889821ms waiting for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.442295 32449 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.714855 32449 pod_ready.go:92] pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:47:59.714865 32449 pod_ready.go:81] duration metric: took 272.570349ms waiting for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:47:59.714870 32449 pod_ready.go:38] duration metric: took 10.815102423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
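Each pod_ready wait above is a poll on the pod's Ready condition in kube-system. A client-go sketch of one such wait; the kubeconfig path is a placeholder, the pod name is taken from the log, and the waitPodReady helper is invented for the sketch:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod every 500ms until its Ready condition is True or the timeout passes.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s did not become Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the test uses the kubeconfig under the integration workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "kube-proxy-vvgjh", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}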
I0725 12:47:59.714885 32449 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0725 12:47:59.722486 32449 ops.go:34] apiserver oom_adj: -16
I0725 12:47:59.722496 32449 kubeadm.go:630] restartCluster took 31.961985619s
I0725 12:47:59.722501 32449 kubeadm.go:397] StartCluster complete in 32.02516291s
I0725 12:47:59.722514 32449 settings.go:142] acquiring lock: {Name:mkd3ca246a72d4c75785a7cc650cfc3c06de2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:59.722609 32449 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:47:59.723211 32449 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:47:59.724153 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-247
57/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:59.726081 32449 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220725124607-24757" rescaled to 1
I0725 12:47:59.726118 32449 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0725 12:47:59.726114 32449 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.64.23 Port:8443 KubernetesVersion:v1.24.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0725 12:47:59.726141 32449 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0725 12:47:59.768656 32449 out.go:177] * Verifying Kubernetes components...
I0725 12:47:59.726174 32449 addons.go:65] Setting storage-provisioner=true in profile "pause-20220725124607-24757"
I0725 12:47:59.726175 32449 addons.go:65] Setting default-storageclass=true in profile "pause-20220725124607-24757"
I0725 12:47:59.726311 32449 config.go:178] Loaded profile config "pause-20220725124607-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.24.2
I0725 12:47:59.787218 32449 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0725 12:47:59.789703 32449 addons.go:153] Setting addon storage-provisioner=true in "pause-20220725124607-24757"
I0725 12:47:59.789706 32449 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220725124607-24757"
W0725 12:47:59.789715 32449 addons.go:162] addon storage-provisioner should already be in state true
I0725 12:47:59.789742 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:47:59.789751 32449 host.go:66] Checking if "pause-20220725124607-24757" exists ...
I0725 12:47:59.790020 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.790041 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.790044 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.790058 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.797714 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51473
I0725 12:47:59.798103 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51475
I0725 12:47:59.798272 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.798435 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.798707 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.798727 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.798816 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.798829 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.798980 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.799060 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.799231 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.799341 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.799436 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.799441 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.799466 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.802301 32449 kapi.go:59] client config for pause-20220725124607-24757: &rest.Config{Host:"https://192.168.64.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/pause-20220725124607-247
57/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:47:59.807605 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51477
I0725 12:47:59.808268 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.808662 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.808673 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.808961 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.809100 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.809218 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.809345 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.810223 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:59.810493 32449 addons.go:153] Setting addon default-storageclass=true in "pause-20220725124607-24757"
I0725 12:47:59.831555 32449 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:47:59.816767 32449 node_ready.go:35] waiting up to 6m0s for node "pause-20220725124607-24757" to be "Ready" ...
W0725 12:47:59.831555 32449 addons.go:162] addon default-storageclass should already be in state true
I0725 12:47:59.852810 32449 host.go:66] Checking if "pause-20220725124607-24757" exists ...
I0725 12:47:59.852826 32449 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:47:59.852835 32449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0725 12:47:59.852853 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:59.853049 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:59.853161 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:59.853180 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.853219 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.853264 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:59.853659 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:59.861671 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51480
I0725 12:47:59.862254 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.862795 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.862844 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.863107 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.863739 32449 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:47:59.863796 32449 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:47:59.871393 32449 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51482
I0725 12:47:59.871804 32449 main.go:134] libmachine: () Calling .GetVersion
I0725 12:47:59.872263 32449 main.go:134] libmachine: Using API Version 1
I0725 12:47:59.872295 32449 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:47:59.872592 32449 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:47:59.872763 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetState
I0725 12:47:59.872884 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:47:59.872977 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | hyperkit pid from json: 32352
I0725 12:47:59.874096 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .DriverName
I0725 12:47:59.874327 32449 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0725 12:47:59.874337 32449 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0725 12:47:59.874346 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHHostname
I0725 12:47:59.874451 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHPort
I0725 12:47:59.874572 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHKeyPath
I0725 12:47:59.874685 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .GetSSHUsername
I0725 12:47:59.874778 32449 sshutil.go:53] new ssh client: &{IP:192.168.64.23 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/pause-20220725124607-24757/id_rsa Username:docker}
I0725 12:47:59.915855 32449 node_ready.go:49] node "pause-20220725124607-24757" has status "Ready":"True"
I0725 12:47:59.915866 32449 node_ready.go:38] duration metric: took 63.170605ms waiting for node "pause-20220725124607-24757" to be "Ready" ...
I0725 12:47:59.915875 32449 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:47:59.938916 32449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0725 12:47:59.970519 32449 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:48:00.117232 32449 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.513514 32449 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:00.513523 32449 pod_ready.go:81] duration metric: took 396.286746ms waiting for pod "coredns-6d4b75cb6d-wnp4h" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.513529 32449 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.539195 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539210 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539198 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539241 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539416 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539417 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539425 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539420 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539436 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539437 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539457 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539460 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539463 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539466 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539639 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539643 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539654 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539655 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539656 32449 main.go:134] libmachine: (pause-20220725124607-24757) DBG | Closing plugin on server side
I0725 12:48:00.539667 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.539671 32449 main.go:134] libmachine: Making call to close driver server
I0725 12:48:00.539682 32449 main.go:134] libmachine: (pause-20220725124607-24757) Calling .Close
I0725 12:48:00.539820 32449 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:00.539830 32449 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:00.563090 32449 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0725 12:47:56.930830 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:57.431141 32469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:47:57.437183 32469 api_server.go:71] duration metric: took 22.018659349s to wait for apiserver process to appear ...
I0725 12:47:57.437203 32469 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:47:57.437218 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:00.653801 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0725 12:48:00.653817 32469 api_server.go:102] status: https://192.168.64.22:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0725 12:48:01.154874 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:01.160888 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0725 12:48:01.160903 32469 api_server.go:102] status: https://192.168.64.22:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0725 12:48:01.654077 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:01.658643 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0725 12:48:01.658662 32469 api_server.go:102] status: https://192.168.64.22:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0725 12:48:02.155582 32469 api_server.go:240] Checking apiserver healthz at https://192.168.64.22:8443/healthz ...
I0725 12:48:02.161203 32469 api_server.go:266] https://192.168.64.22:8443/healthz returned 200:
ok
I0725 12:48:02.165958 32469 api_server.go:140] control plane version: v1.17.0
I0725 12:48:02.165972 32469 api_server.go:130] duration metric: took 4.72885577s to wait for apiserver health ...
I0725 12:48:02.165978 32469 cni.go:95] Creating CNI manager for ""
I0725 12:48:02.165982 32469 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I0725 12:48:02.165991 32469 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:48:02.170002 32469 system_pods.go:59] 4 kube-system pods found
I0725 12:48:02.170018 32469 system_pods.go:61] "coredns-6955765f44-5jfdg" [5020da1b-6a45-4b39-802d-5c9520158377] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0725 12:48:02.170023 32469 system_pods.go:61] "coredns-6955765f44-gnd7x" [0b93954b-6f29-427e-bd15-676a6271e58c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0725 12:48:02.170047 32469 system_pods.go:61] "kube-proxy-fw74h" [696567b4-f041-40e0-9649-7fdddfa70df2] Pending
I0725 12:48:02.170051 32469 system_pods.go:61] "storage-provisioner" [af16d783-2ed9-45b6-ac15-a47946381e08] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
I0725 12:48:02.170056 32469 system_pods.go:74] duration metric: took 4.061205ms to wait for pod list to return data ...
I0725 12:48:02.170062 32469 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:02.172207 32469 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
I0725 12:48:02.172219 32469 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:02.172226 32469 node_conditions.go:105] duration metric: took 2.160948ms to run NodePressure ...
I0725 12:48:02.172241 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0725 12:48:02.318381 32469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0725 12:48:02.325026 32469 ops.go:34] apiserver oom_adj: -16
I0725 12:48:02.325035 32469 kubeadm.go:630] restartCluster took 28.282153975s
I0725 12:48:02.325041 32469 kubeadm.go:397] StartCluster complete in 28.307791297s
I0725 12:48:02.325055 32469 settings.go:142] acquiring lock: {Name:mkd3ca246a72d4c75785a7cc650cfc3c06de2b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:48:02.325122 32469 settings.go:150] Updating kubeconfig: /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig
I0725 12:48:02.326257 32469 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/kubeconfig: {Name:mkf13cdaa6d8207dd8a8820ce636cc1aacc67288 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0725 12:48:02.327261 32469 kapi.go:59] client config for running-upgrade-20220725124546-24757: &rest.Config{Host:"https://192.168.64.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:48:02.837236 32469 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "running-upgrade-20220725124546-24757" rescaled to 1
I0725 12:48:02.837279 32469 start.go:211] Will wait 6m0s for node &{Name:minikube IP:192.168.64.22 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0725 12:48:02.837323 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0725 12:48:02.837381 32469 addons.go:412] enableAddons start: toEnable=map[], additional=[]
I0725 12:48:02.837464 32469 config.go:178] Loaded profile config "running-upgrade-20220725124546-24757": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.17.0
I0725 12:48:02.858837 32469 addons.go:65] Setting default-storageclass=true in profile "running-upgrade-20220725124546-24757"
I0725 12:48:02.858847 32469 addons.go:65] Setting storage-provisioner=true in profile "running-upgrade-20220725124546-24757"
I0725 12:48:02.858871 32469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-20220725124546-24757"
I0725 12:48:02.858713 32469 out.go:177] * Verifying Kubernetes components...
I0725 12:48:02.858893 32469 addons.go:153] Setting addon storage-provisioner=true in "running-upgrade-20220725124546-24757"
W0725 12:48:02.858912 32469 addons.go:162] addon storage-provisioner should already be in state true
I0725 12:48:02.859522 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.895683 32469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:48:02.895717 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.895724 32469 host.go:66] Checking if "running-upgrade-20220725124546-24757" exists ...
I0725 12:48:02.897226 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.897825 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.902736 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51490
I0725 12:48:02.903140 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.903524 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.903535 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.903754 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.903848 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:48:02.903941 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:48:02.904026 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:48:02.904347 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51492
I0725 12:48:02.904625 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.904936 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.904954 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.905157 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.905503 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.905553 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.905568 32469 kapi.go:59] client config for running-upgrade-20220725124546-24757: &rest.Config{Host:"https://192.168.64.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/profiles/running-upgrade-20220725124546-24757/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fcfe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0725 12:48:02.911425 32469 addons.go:153] Setting addon default-storageclass=true in "running-upgrade-20220725124546-24757"
W0725 12:48:02.911442 32469 addons.go:162] addon default-storageclass should already be in state true
I0725 12:48:02.911462 32469 host.go:66] Checking if "running-upgrade-20220725124546-24757" exists ...
I0725 12:48:02.911756 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.911789 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.913915 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51494
I0725 12:48:02.914326 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.914784 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.914798 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.914994 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.915089 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:48:02.915170 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:48:02.915252 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:48:02.916056 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:48:02.918394 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51496
I0725 12:48:02.937579 32469 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0725 12:48:00.621161 32449 addons.go:414] enableAddons completed in 895.045234ms
I0725 12:48:00.914536 32449 pod_ready.go:92] pod "etcd-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:00.914568 32449 pod_ready.go:81] duration metric: took 401.042289ms waiting for pod "etcd-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:00.914575 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.315399 32449 pod_ready.go:92] pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:01.315410 32449 pod_ready.go:81] duration metric: took 400.837301ms waiting for pod "kube-apiserver-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.315417 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.713652 32449 pod_ready.go:92] pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:01.713662 32449 pod_ready.go:81] duration metric: took 398.24262ms waiting for pod "kube-controller-manager-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:01.713669 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.116833 32449 pod_ready.go:92] pod "kube-proxy-vvgjh" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:02.116846 32449 pod_ready.go:81] duration metric: took 403.180188ms waiting for pod "kube-proxy-vvgjh" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.116857 32449 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.514872 32449 pod_ready.go:92] pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace has status "Ready":"True"
I0725 12:48:02.514885 32449 pod_ready.go:81] duration metric: took 398.015294ms waiting for pod "kube-scheduler-pause-20220725124607-24757" in "kube-system" namespace to be "Ready" ...
I0725 12:48:02.514892 32449 pod_ready.go:38] duration metric: took 2.599056789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0725 12:48:02.514914 32449 api_server.go:51] waiting for apiserver process to appear ...
I0725 12:48:02.514971 32449 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0725 12:48:02.524797 32449 api_server.go:71] duration metric: took 2.798697005s to wait for apiserver process to appear ...
I0725 12:48:02.524812 32449 api_server.go:87] waiting for apiserver healthz status ...
I0725 12:48:02.524819 32449 api_server.go:240] Checking apiserver healthz at https://192.168.64.23:8443/healthz ...
I0725 12:48:02.528761 32449 api_server.go:266] https://192.168.64.23:8443/healthz returned 200:
ok
I0725 12:48:02.529297 32449 api_server.go:140] control plane version: v1.24.2
I0725 12:48:02.529305 32449 api_server.go:130] duration metric: took 4.48935ms to wait for apiserver health ...
I0725 12:48:02.529310 32449 system_pods.go:43] waiting for kube-system pods to appear ...
I0725 12:48:02.717715 32449 system_pods.go:59] 7 kube-system pods found
I0725 12:48:02.717729 32449 system_pods.go:61] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running
I0725 12:48:02.717733 32449 system_pods.go:61] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running
I0725 12:48:02.717739 32449 system_pods.go:61] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:48:02.717743 32449 system_pods.go:61] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running
I0725 12:48:02.717746 32449 system_pods.go:61] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:48:02.717750 32449 system_pods.go:61] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:48:02.717753 32449 system_pods.go:61] "storage-provisioner" [7d189436-f57b-4db0-a2c3-534d702f468f] Running
I0725 12:48:02.717757 32449 system_pods.go:74] duration metric: took 188.447508ms to wait for pod list to return data ...
I0725 12:48:02.717768 32449 default_sa.go:34] waiting for default service account to be created ...
I0725 12:48:02.914666 32449 default_sa.go:45] found service account: "default"
I0725 12:48:02.914676 32449 default_sa.go:55] duration metric: took 196.907597ms for default service account to be created ...
I0725 12:48:02.914681 32449 system_pods.go:116] waiting for k8s-apps to be running ...
I0725 12:48:03.116281 32449 system_pods.go:86] 7 kube-system pods found
I0725 12:48:03.116295 32449 system_pods.go:89] "coredns-6d4b75cb6d-wnp4h" [6b4a2096-027b-40d7-8f3f-f2e78d7f76c7] Running
I0725 12:48:03.116300 32449 system_pods.go:89] "etcd-pause-20220725124607-24757" [7d7af23c-8431-4e43-add5-9213ceac0862] Running
I0725 12:48:03.116304 32449 system_pods.go:89] "kube-apiserver-pause-20220725124607-24757" [af42ac19-2758-4cc0-acf5-29f09c593579] Running
I0725 12:48:03.116307 32449 system_pods.go:89] "kube-controller-manager-pause-20220725124607-24757" [c987293e-fdec-460c-bac5-779ee584bf14] Running
I0725 12:48:03.116311 32449 system_pods.go:89] "kube-proxy-vvgjh" [cc6970ad-eca0-464d-a5c0-5eecee54875c] Running
I0725 12:48:03.116314 32449 system_pods.go:89] "kube-scheduler-pause-20220725124607-24757" [540dd4b3-4c77-47ac-a07c-1de4714e62cf] Running
I0725 12:48:03.116319 32449 system_pods.go:89] "storage-provisioner" [7d189436-f57b-4db0-a2c3-534d702f468f] Running
I0725 12:48:03.116334 32449 system_pods.go:126] duration metric: took 201.650654ms to wait for k8s-apps to be running ...
I0725 12:48:03.116348 32449 system_svc.go:44] waiting for kubelet service to be running ....
I0725 12:48:03.116413 32449 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0725 12:48:03.126610 32449 system_svc.go:56] duration metric: took 10.263673ms WaitForService to wait for kubelet.
I0725 12:48:03.126626 32449 kubeadm.go:572] duration metric: took 3.400540205s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0725 12:48:03.126644 32449 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:03.314389 32449 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
I0725 12:48:03.314403 32449 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:03.314410 32449 node_conditions.go:105] duration metric: took 187.766416ms to run NodePressure ...
I0725 12:48:03.314435 32449 start.go:216] waiting for startup goroutines ...
I0725 12:48:03.348116 32449 start.go:506] kubectl: 1.24.1, cluster: 1.24.2 (minor skew: 0)
I0725 12:48:02.938104 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.953776 32469 kubeadm.go:509] skip waiting for components based on config.
I0725 12:48:02.958690 32469 node_conditions.go:102] verifying NodePressure condition ...
I0725 12:48:02.953806 32469 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.64.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0725 12:48:02.958753 32469 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:48:02.958763 32469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0725 12:48:02.958775 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:48:02.958913 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:48:02.959060 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:48:02.959095 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.959105 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.959179 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:48:02.959273 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:48:02.959303 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.959664 32469 main.go:134] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0725 12:48:02.959688 32469 main.go:134] libmachine: Launching plugin server for driver hyperkit
I0725 12:48:02.961635 32469 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
I0725 12:48:02.961655 32469 node_conditions.go:123] node cpu capacity is 2
I0725 12:48:02.961666 32469 node_conditions.go:105] duration metric: took 2.969466ms to run NodePressure ...
I0725 12:48:02.961676 32469 start.go:216] waiting for startup goroutines ...
I0725 12:48:02.966514 32469 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:51499
I0725 12:48:02.966876 32469 main.go:134] libmachine: () Calling .GetVersion
I0725 12:48:02.967251 32469 main.go:134] libmachine: Using API Version 1
I0725 12:48:02.967267 32469 main.go:134] libmachine: () Calling .SetConfigRaw
I0725 12:48:02.967478 32469 main.go:134] libmachine: () Calling .GetMachineName
I0725 12:48:02.967572 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetState
I0725 12:48:02.967663 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0725 12:48:02.967745 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | hyperkit pid from json: 32308
I0725 12:48:02.968571 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .DriverName
I0725 12:48:02.968741 32469 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
I0725 12:48:02.968748 32469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0725 12:48:02.968760 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHHostname
I0725 12:48:02.968840 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHPort
I0725 12:48:02.968961 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHKeyPath
I0725 12:48:02.969049 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .GetSSHUsername
I0725 12:48:02.969156 32469 sshutil.go:53] new ssh client: &{IP:192.168.64.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-hyperkit--14555-23603-0a167c9e2958e27b8ab0e3c17b04ac7cefde8636/.minikube/machines/running-upgrade-20220725124546-24757/id_rsa Username:docker}
I0725 12:48:03.048242 32469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0725 12:48:03.052162 32469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0725 12:48:03.300822 32469 start.go:809] {"host.minikube.internal": 192.168.64.1} host record injected into CoreDNS
I0725 12:48:03.342252 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.342267 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.342566 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.342567 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.342579 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.342593 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.342602 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.342798 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.342807 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.342813 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.355606 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.355618 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.355775 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.355776 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.355788 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.355795 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.355802 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.355940 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.355954 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.355959 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.355973 32469 main.go:134] libmachine: Making call to close driver server
I0725 12:48:03.355985 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) Calling .Close
I0725 12:48:03.356133 32469 main.go:134] libmachine: Successfully made call to close driver server
I0725 12:48:03.356142 32469 main.go:134] libmachine: Making call to close connection to plugin binary
I0725 12:48:03.356142 32469 main.go:134] libmachine: (running-upgrade-20220725124546-24757) DBG | Closing plugin on server side
I0725 12:48:03.423611 32449 out.go:177] * Done! kubectl is now configured to use "pause-20220725124607-24757" cluster and "default" namespace by default
I0725 12:48:03.498891 32469 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0725 12:48:03.573678 32469 addons.go:414] enableAddons completed in 736.324707ms
I0725 12:48:03.606529 32469 start.go:506] kubectl: 1.24.1, cluster: 1.17.0 (minor skew: 7)
I0725 12:48:03.643611 32469 out.go:177]
W0725 12:48:03.680965 32469 out.go:239] ! /usr/local/bin/kubectl is version 1.24.1, which may have incompatibilities with Kubernetes 1.17.0.
I0725 12:48:03.702704 32469 out.go:177] - Want kubectl v1.17.0? Try 'minikube kubectl -- get pods -A'
I0725 12:48:03.744700 32469 out.go:177] * Done! kubectl is now configured to use "running-upgrade-20220725124546-24757" cluster and "" namespace by default
*
* ==> Docker <==
* -- Journal begins at Mon 2022-07-25 19:46:16 UTC, ends at Mon 2022-07-25 19:48:09 UTC. --
Jul 25 19:47:43 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:43.316448750Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a51243e160348a9c7d895ff4b74f6db59fc3dee2a3ffb5381b3058049f35d0ca pid=5315 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288094287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288157378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288166564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.288579698Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0b271d8323e469e9425826e335d92e59256ebd75ce42f8009b7a7279eefc07da pid=5509 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608781933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608856088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608865061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.608968379Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bcc7e24af2f787666205ec9176991752cc04dba24e846dea67461ab2186560da pid=5555 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.703930249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.704080973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.704139164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.704374110Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fceb0f5d9dc7201a35c92079ae95fed690deec2aa8c7e3005763dec6094d8a75 pid=5605 runtime=io.containerd.runc.v2
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764529291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764691208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764748598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:47:49 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:47:49.764901997Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0dbd38c4ed4c42601d566168de86ef6e6b28cc24e4ae6eb8cf09a49921cd8491 pid=5642 runtime=io.containerd.runc.v2
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208204985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208247966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208260928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.208617795Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/75583aa7957ba5b117c984936a2c407dab53b4eac952fb60df2da647aab86e92 pid=5919 runtime=io.containerd.runc.v2
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497457709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497537943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497546923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 25 19:48:01 pause-20220725124607-24757 dockerd[3660]: time="2022-07-25T19:48:01.497968399Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f4d3b9b8fc44720aaf0a35ed3cd4bd0adbc8ef91a113205bdaaaa98a79defe00 pid=5962 runtime=io.containerd.runc.v2
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f4d3b9b8fc447 6e38f40d628db 8 seconds ago Running storage-provisioner 0 75583aa7957ba
0dbd38c4ed4c4 a634548d10b03 20 seconds ago Running kube-proxy 2 bcc7e24af2f78
fceb0f5d9dc72 a4ca41631cc7a 20 seconds ago Running coredns 2 0b271d8323e46
a51243e160348 5d725196c1f47 26 seconds ago Running kube-scheduler 2 5e9442d88e3be
0964c918df2bc aebe758cef4cd 27 seconds ago Running etcd 2 faca62db02339
974594e52480a 34cdf99b1bb3b 27 seconds ago Running kube-controller-manager 2 806492f0a2c24
7249d3d37a7d2 d3377ffb7177c 27 seconds ago Running kube-apiserver 2 4df5104a7ad6c
8abc60a3d3664 5d725196c1f47 41 seconds ago Exited kube-scheduler 1 fa5cdb6bc0bc1
148739a1c8bf7 a634548d10b03 41 seconds ago Exited kube-proxy 1 8c03efc958d74
6c4c14ed6c7bb a4ca41631cc7a 42 seconds ago Exited coredns 1 ac68acceae4b2
82a2874088cf8 34cdf99b1bb3b 54 seconds ago Exited kube-controller-manager 1 7c07ade5b55ec
4bbd9292ccc1e d3377ffb7177c 55 seconds ago Exited kube-apiserver 1 fdfac1f68e49f
aa9e0a649a58c aebe758cef4cd 55 seconds ago Exited etcd 1 8dc345a99c847
*
* ==> coredns [6c4c14ed6c7b] <==
* [INFO] SIGTERM: Shutting down servers then terminating
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
[INFO] plugin/health: Going into lameduck mode for 5s
[ERROR] plugin/errors: 2 9149239292398430472.1479233848501534760. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
[WARNING] plugin/health: Local health request to "http://:8080/health" failed: Get "http://:8080/health": dial tcp :8080: connect: connection reset by peer
[ERROR] plugin/errors: 2 9149239292398430472.1479233848501534760. HINFO: dial udp 192.168.64.1:53: connect: network is unreachable
*
* ==> coredns [fceb0f5d9dc7] <==
* .:53
[INFO] plugin/reload: Running configuration MD5 = 08e2b174e0f0a30a2e82df9c995f4a34
CoreDNS-1.8.6
linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
* Name: pause-20220725124607-24757
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-20220725124607-24757
kubernetes.io/os=linux
minikube.k8s.io/commit=a5b59bcfc16aadb787d3d4f0635e06172b98dce6
minikube.k8s.io/name=pause-20220725124607-24757
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_07_25T12_46_46_0700
minikube.k8s.io/version=v1.26.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 25 Jul 2022 19:46:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-20220725124607-24757
AcquireTime: <unset>
RenewTime: Mon, 25 Jul 2022 19:48:08 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 25 Jul 2022 19:47:47 +0000 Mon, 25 Jul 2022 19:46:46 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.23
Hostname: pause-20220725124607-24757
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017588Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 2017588Ki
pods: 110
System Info:
Machine ID: 0b81cda4acc64db9b36933459060308a
System UUID: 6d8c11ed-0000-0000-b12e-149d997cd0f1
Boot ID: 3b06b3c4-9b77-48bb-ad0b-163ba01d6234
Kernel Version: 5.10.57
OS Image: Buildroot 2021.02.12
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.17
Kubelet Version: v1.24.2
Kube-Proxy Version: v1.24.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6d4b75cb6d-wnp4h 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 70s
kube-system etcd-pause-20220725124607-24757 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 84s
kube-system kube-apiserver-pause-20220725124607-24757 250m (12%) 0 (0%) 0 (0%) 0 (0%) 84s
kube-system kube-controller-manager-pause-20220725124607-24757 200m (10%) 0 (0%) 0 (0%) 0 (0%) 84s
kube-system kube-proxy-vvgjh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71s
kube-system kube-scheduler-pause-20220725124607-24757 100m (5%) 0 (0%) 0 (0%) 0 (0%) 83s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (8%) 170Mi (8%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 70s kube-proxy
Normal Starting 20s kube-proxy
Normal NodeAllocatableEnforced 95s kubelet Updated Node Allocatable limit across pods
Normal Starting 95s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 95s (x4 over 95s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 95s (x3 over 95s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 95s (x3 over 95s) kubelet Node pause-20220725124607-24757 status is now: NodeHasNoDiskPressure
Normal NodeReady 84s kubelet Node pause-20220725124607-24757 status is now: NodeReady
Normal Starting 84s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 84s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 84s kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 84s kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 84s kubelet Node pause-20220725124607-24757 status is now: NodeHasNoDiskPressure
Normal RegisteredNode 71s node-controller Node pause-20220725124607-24757 event: Registered Node pause-20220725124607-24757 in Controller
Normal Starting 29s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 29s (x8 over 29s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 29s (x8 over 29s) kubelet Node pause-20220725124607-24757 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 29s (x7 over 29s) kubelet Node pause-20220725124607-24757 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 29s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 10s node-controller Node pause-20220725124607-24757 event: Registered Node pause-20220725124607-24757 in Controller
*
* ==> dmesg <==
* [ +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +1.939705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
[ +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
[ +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
[ +3.660330] systemd-fstab-generator[552]: Ignoring "noauto" for root device
[ +0.094408] systemd-fstab-generator[563]: Ignoring "noauto" for root device
[ +5.949424] systemd-fstab-generator[783]: Ignoring "noauto" for root device
[ +1.347769] kauditd_printk_skb: 16 callbacks suppressed
[ +0.237236] systemd-fstab-generator[943]: Ignoring "noauto" for root device
[ +0.086452] systemd-fstab-generator[954]: Ignoring "noauto" for root device
[ +0.092085] systemd-fstab-generator[965]: Ignoring "noauto" for root device
[ +1.398983] systemd-fstab-generator[1115]: Ignoring "noauto" for root device
[ +0.091065] systemd-fstab-generator[1126]: Ignoring "noauto" for root device
[ +3.454273] systemd-fstab-generator[1352]: Ignoring "noauto" for root device
[ +0.498420] kauditd_printk_skb: 68 callbacks suppressed
[ +11.241941] systemd-fstab-generator[2050]: Ignoring "noauto" for root device
[Jul25 19:47] kauditd_printk_skb: 7 callbacks suppressed
[ +5.470060] systemd-fstab-generator[2934]: Ignoring "noauto" for root device
[ +0.120849] systemd-fstab-generator[2945]: Ignoring "noauto" for root device
[ +0.127788] systemd-fstab-generator[2956]: Ignoring "noauto" for root device
[ +0.332668] kauditd_printk_skb: 16 callbacks suppressed
[ +20.655882] systemd-fstab-generator[4010]: Ignoring "noauto" for root device
[ +0.127786] systemd-fstab-generator[4103]: Ignoring "noauto" for root device
[ +14.271837] systemd-fstab-generator[4861]: Ignoring "noauto" for root device
[ +7.791947] kauditd_printk_skb: 31 callbacks suppressed
*
* ==> etcd [0964c918df2b] <==
* {"level":"info","ts":"2022-07-25T19:47:43.971Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"358a38a4be5dda21","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
{"level":"info","ts":"2022-07-25T19:47:43.980Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 switched to configuration voters=(3857958311015864865)"}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","added-peer-id":"358a38a4be5dda21","added-peer-peer-urls":["https://192.168.64.23:2380"]}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bf21a475ce91bca1","local-member-id":"358a38a4be5dda21","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-25T19:47:43.982Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-07-25T19:47:43.988Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"358a38a4be5dda21","initial-advertise-peer-urls":["https://192.168.64.23:2380"],"listen-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-25T19:47:43.981Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-25T19:47:43.982Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:43.988Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-25T19:47:43.989Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:45.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 is starting a new election at term 3"}
{"level":"info","ts":"2022-07-25T19:47:45.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became pre-candidate at term 3"}
{"level":"info","ts":"2022-07-25T19:47:45.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgPreVoteResp from 358a38a4be5dda21 at term 3"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became candidate at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgVoteResp from 358a38a4be5dda21 at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became leader at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 358a38a4be5dda21 elected leader 358a38a4be5dda21 at term 4"}
{"level":"info","ts":"2022-07-25T19:47:45.746Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"358a38a4be5dda21","local-member-attributes":"{Name:pause-20220725124607-24757 ClientURLs:[https://192.168.64.23:2379]}","request-path":"/0/members/358a38a4be5dda21/attributes","cluster-id":"bf21a475ce91bca1","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-25T19:47:45.746Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:45.747Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-25T19:47:45.748Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.23:2379"}
*
* ==> etcd [aa9e0a649a58] <==
* {"level":"info","ts":"2022-07-25T19:47:15.069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-07-25T19:47:15.070Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"358a38a4be5dda21","initial-advertise-peer-urls":["https://192.168.64.23:2380"],"listen-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.64.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-07-25T19:47:15.070Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 is starting a new election at term 2"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became pre-candidate at term 2"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgPreVoteResp from 358a38a4be5dda21 at term 2"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became candidate at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 received MsgVoteResp from 358a38a4be5dda21 at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"358a38a4be5dda21 became leader at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 358a38a4be5dda21 elected leader 358a38a4be5dda21 at term 3"}
{"level":"info","ts":"2022-07-25T19:47:16.363Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"358a38a4be5dda21","local-member-attributes":"{Name:pause-20220725124607-24757 ClientURLs:[https://192.168.64.23:2379]}","request-path":"/0/members/358a38a4be5dda21/attributes","cluster-id":"bf21a475ce91bca1","publish-timeout":"7s"}
{"level":"info","ts":"2022-07-25T19:47:16.363Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:16.370Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2022-07-25T19:47:16.370Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-07-25T19:47:16.370Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.64.23:2379"}
{"level":"info","ts":"2022-07-25T19:47:16.390Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-07-25T19:47:16.390Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-07-25T19:47:16.689Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2022-07-25T19:47:16.689Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220725124607-24757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"]}
WARNING: 2022/07/25 19:47:16 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
WARNING: 2022/07/25 19:47:16 [core] grpc: addrConn.createTransport failed to connect to {192.168.64.23:2379 192.168.64.23:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.64.23:2379: connect: connection refused". Reconnecting...
{"level":"info","ts":"2022-07-25T19:47:16.701Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"358a38a4be5dda21","current-leader-member-id":"358a38a4be5dda21"}
{"level":"info","ts":"2022-07-25T19:47:16.735Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:16.738Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.64.23:2380"}
{"level":"info","ts":"2022-07-25T19:47:16.738Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220725124607-24757","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.64.23:2380"],"advertise-client-urls":["https://192.168.64.23:2379"]}
*
* ==> kernel <==
* 19:48:11 up 2 min, 0 users, load average: 0.87, 0.44, 0.17
Linux pause-20220725124607-24757 5.10.57 #1 SMP Sat Jul 9 07:31:52 UTC 2022 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2021.02.12"
*
* ==> kube-apiserver [4bbd9292ccc1] <==
* W0725 19:47:21.977393 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.106201 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.134715 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.144887 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.219193 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.241381 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.259203 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.314408 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:22.401529 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:24.786402 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.181079 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.192532 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.220692 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.228908 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.439361 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.786812 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.821600 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:25.873320 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.105529 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.108720 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.124114 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.230369 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.288934 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.340338 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0725 19:47:26.486859 1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
*
* ==> kube-apiserver [7249d3d37a7d] <==
* I0725 19:47:47.552808 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0725 19:47:47.552901 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0725 19:47:47.552977 1 crd_finalizer.go:266] Starting CRDFinalizer
I0725 19:47:47.563204 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0725 19:47:47.587654 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0725 19:47:47.588154 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0725 19:47:47.588162 1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I0725 19:47:47.649579 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0725 19:47:47.649937 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0725 19:47:47.651903 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I0725 19:47:47.652260 1 cache.go:39] Caches are synced for autoregister controller
I0725 19:47:47.652539 1 apf_controller.go:322] Running API Priority and Fairness config worker
I0725 19:47:47.652601 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0725 19:47:47.684977 1 shared_informer.go:262] Caches are synced for node_authorizer
I0725 19:47:47.689739 1 shared_informer.go:262] Caches are synced for crd-autoregister
I0725 19:47:48.303009 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0725 19:47:48.551057 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0725 19:47:49.169392 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I0725 19:47:49.179455 1 controller.go:611] quota admission added evaluator for: deployments.apps
I0725 19:47:49.206963 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I0725 19:47:49.217169 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0725 19:47:49.221506 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0725 19:47:49.921186 1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
I0725 19:48:00.206294 1 controller.go:611] quota admission added evaluator for: endpoints
I0725 19:48:00.275290 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
*
* ==> kube-controller-manager [82a2874088cf] <==
* I0725 19:47:16.319977 1 serving.go:348] Generated self-signed cert in-memory
*
* ==> kube-controller-manager [974594e52480] <==
* I0725 19:48:00.216545 1 shared_informer.go:262] Caches are synced for namespace
I0725 19:48:00.216556 1 shared_informer.go:262] Caches are synced for GC
I0725 19:48:00.218926 1 shared_informer.go:262] Caches are synced for deployment
I0725 19:48:00.220397 1 shared_informer.go:262] Caches are synced for stateful set
I0725 19:48:00.224287 1 shared_informer.go:262] Caches are synced for bootstrap_signer
I0725 19:48:00.225610 1 shared_informer.go:262] Caches are synced for expand
I0725 19:48:00.227885 1 shared_informer.go:262] Caches are synced for PVC protection
I0725 19:48:00.241304 1 shared_informer.go:262] Caches are synced for ReplicaSet
I0725 19:48:00.241831 1 shared_informer.go:262] Caches are synced for crt configmap
I0725 19:48:00.267428 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I0725 19:48:00.267536 1 shared_informer.go:262] Caches are synced for endpoint_slice
I0725 19:48:00.352108 1 shared_informer.go:262] Caches are synced for taint
I0725 19:48:00.352228 1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone:
I0725 19:48:00.352235 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W0725 19:48:00.352273 1 node_lifecycle_controller.go:1014] Missing timestamp for Node pause-20220725124607-24757. Assuming now as a timestamp.
I0725 19:48:00.352298 1 node_lifecycle_controller.go:1215] Controller detected that zone is now in state Normal.
I0725 19:48:00.352364 1 event.go:294] "Event occurred" object="pause-20220725124607-24757" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220725124607-24757 event: Registered Node pause-20220725124607-24757 in Controller"
I0725 19:48:00.422924 1 shared_informer.go:262] Caches are synced for cronjob
I0725 19:48:00.436655 1 shared_informer.go:262] Caches are synced for resource quota
I0725 19:48:00.442616 1 shared_informer.go:262] Caches are synced for resource quota
I0725 19:48:00.452304 1 shared_informer.go:262] Caches are synced for TTL after finished
I0725 19:48:00.457658 1 shared_informer.go:262] Caches are synced for job
I0725 19:48:00.869039 1 shared_informer.go:262] Caches are synced for garbage collector
I0725 19:48:00.921143 1 shared_informer.go:262] Caches are synced for garbage collector
I0725 19:48:00.921159 1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [0dbd38c4ed4c] <==
* I0725 19:47:49.892726 1 node.go:163] Successfully retrieved node IP: 192.168.64.23
I0725 19:47:49.892777 1 server_others.go:138] "Detected node IP" address="192.168.64.23"
I0725 19:47:49.892794 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0725 19:47:49.916321 1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0725 19:47:49.916354 1 server_others.go:206] "Using iptables Proxier"
I0725 19:47:49.916374 1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I0725 19:47:49.916822 1 server.go:661] "Version info" version="v1.24.2"
I0725 19:47:49.916849 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0725 19:47:49.917568 1 config.go:317] "Starting service config controller"
I0725 19:47:49.917598 1 shared_informer.go:255] Waiting for caches to sync for service config
I0725 19:47:49.917617 1 config.go:226] "Starting endpoint slice config controller"
I0725 19:47:49.917620 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0725 19:47:49.919327 1 config.go:444] "Starting node config controller"
I0725 19:47:49.919353 1 shared_informer.go:255] Waiting for caches to sync for node config
I0725 19:47:50.018684 1 shared_informer.go:262] Caches are synced for endpoint slice config
I0725 19:47:50.018771 1 shared_informer.go:262] Caches are synced for service config
I0725 19:47:50.019793 1 shared_informer.go:262] Caches are synced for node config
*
* ==> kube-proxy [148739a1c8bf] <==
* E0725 19:47:28.237731 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220725124607-24757": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:29.292146 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-20220725124607-24757": dial tcp 192.168.64.23:8443: connect: connection refused
*
* ==> kube-scheduler [8abc60a3d366] <==
* W0725 19:47:30.218974 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.23:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.219370 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.23:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.366868 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.64.23:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.366924 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.23:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.407825 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.408146 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.419525 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.64.23:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.419754 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.23:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.433625 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.64.23:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.433671 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.23:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.609277 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://192.168.64.23:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.609319 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.23:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.628983 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.64.23:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.629024 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.23:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.660487 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.64.23:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.660554 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.23:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.728171 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.64.23:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.728211 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.64.23:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.744074 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.744102 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.64.23:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
W0725 19:47:30.749994 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.64.23:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
E0725 19:47:30.750011 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.23:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.23:8443: connect: connection refused
I0725 19:47:31.226628 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0725 19:47:31.226954 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
E0725 19:47:31.226972 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kube-scheduler [a51243e16034] <==
* W0725 19:47:47.630044 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0725 19:47:47.630073 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0725 19:47:47.630219 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0725 19:47:47.630288 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0725 19:47:47.630381 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0725 19:47:47.630409 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0725 19:47:47.630582 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0725 19:47:47.630695 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0725 19:47:47.630837 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0725 19:47:47.630865 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0725 19:47:47.631066 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0725 19:47:47.631095 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0725 19:47:47.631291 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0725 19:47:47.631320 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0725 19:47:47.631514 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0725 19:47:47.631543 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0725 19:47:47.633716 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0725 19:47:47.633745 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0725 19:47:47.633856 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0725 19:47:47.633996 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0725 19:47:47.633930 1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0725 19:47:47.634128 1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0725 19:47:47.638899 1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0725 19:47:47.638930 1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0725 19:47:49.315516 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Journal begins at Mon 2022-07-25 19:46:16 UTC, ends at Mon 2022-07-25 19:48:12 UTC. --
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: E0725 19:47:47.424330 4867 kubelet.go:2424] "Error getting node" err="node \"pause-20220725124607-24757\" not found"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: E0725 19:47:47.525057 4867 kubelet.go:2424] "Error getting node" err="node \"pause-20220725124607-24757\" not found"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: E0725 19:47:47.625805 4867 kubelet.go:2424] "Error getting node" err="node \"pause-20220725124607-24757\" not found"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.658326 4867 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220725124607-24757"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.658544 4867 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220725124607-24757"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.660223 4867 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Jul 25 19:47:47 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:47.661058 4867 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.641504 4867 apiserver.go:52] "Watching apiserver"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.643024 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.643091 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.643118 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.736645 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc6970ad-eca0-464d-a5c0-5eecee54875c-kube-proxy\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.736905 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc6970ad-eca0-464d-a5c0-5eecee54875c-xtables-lock\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737135 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swc2d\" (UniqueName: \"kubernetes.io/projected/cc6970ad-eca0-464d-a5c0-5eecee54875c-kube-api-access-swc2d\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737412 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4b7\" (UniqueName: \"kubernetes.io/projected/6b4a2096-027b-40d7-8f3f-f2e78d7f76c7-kube-api-access-4z4b7\") pod \"coredns-6d4b75cb6d-wnp4h\" (UID: \"6b4a2096-027b-40d7-8f3f-f2e78d7f76c7\") " pod="kube-system/coredns-6d4b75cb6d-wnp4h"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737631 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b4a2096-027b-40d7-8f3f-f2e78d7f76c7-config-volume\") pod \"coredns-6d4b75cb6d-wnp4h\" (UID: \"6b4a2096-027b-40d7-8f3f-f2e78d7f76c7\") " pod="kube-system/coredns-6d4b75cb6d-wnp4h"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.737898 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc6970ad-eca0-464d-a5c0-5eecee54875c-lib-modules\") pod \"kube-proxy-vvgjh\" (UID: \"cc6970ad-eca0-464d-a5c0-5eecee54875c\") " pod="kube-system/kube-proxy-vvgjh"
Jul 25 19:47:48 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:48.738046 4867 reconciler.go:157] "Reconciler: start to sync state"
Jul 25 19:47:51 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:51.199283 4867 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 25 19:47:51 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:51.754021 4867 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bfdceddb-f0ec-481c-a4a2-ce56bb133d27 path="/var/lib/kubelet/pods/bfdceddb-f0ec-481c-a4a2-ce56bb133d27/volumes"
Jul 25 19:47:59 pause-20220725124607-24757 kubelet[4867]: I0725 19:47:59.043070 4867 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
Jul 25 19:48:00 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:00.855095 4867 topology_manager.go:200] "Topology Admit Handler"
Jul 25 19:48:00 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:00.935818 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7d189436-f57b-4db0-a2c3-534d702f468f-tmp\") pod \"storage-provisioner\" (UID: \"7d189436-f57b-4db0-a2c3-534d702f468f\") " pod="kube-system/storage-provisioner"
Jul 25 19:48:00 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:00.936010 4867 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tbpr\" (UniqueName: \"kubernetes.io/projected/7d189436-f57b-4db0-a2c3-534d702f468f-kube-api-access-5tbpr\") pod \"storage-provisioner\" (UID: \"7d189436-f57b-4db0-a2c3-534d702f468f\") " pod="kube-system/storage-provisioner"
Jul 25 19:48:01 pause-20220725124607-24757 kubelet[4867]: I0725 19:48:01.459786 4867 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="75583aa7957ba5b117c984936a2c407dab53b4eac952fb60df2da647aab86e92"
*
* ==> storage-provisioner [f4d3b9b8fc44] <==
* I0725 19:48:01.570677 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0725 19:48:01.581569 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0725 19:48:01.581924 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0725 19:48:01.593967 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0725 19:48:01.594237 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220725124607-24757_b4ae8073-0557-44c9-82ea-8620c21314c2!
I0725 19:48:01.595287 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"608c730e-6eca-4f99-a3f3-38ad329fea2b", APIVersion:"v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220725124607-24757_b4ae8073-0557-44c9-82ea-8620c21314c2 became leader
I0725 19:48:01.694800 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220725124607-24757_b4ae8073-0557-44c9-82ea-8620c21314c2!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220725124607-24757 -n pause-20220725124607-24757
helpers_test.go:261: (dbg) Run: kubectl --context pause-20220725124607-24757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-20220725124607-24757 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220725124607-24757 describe pod : exit status 1 (37.676233ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-20220725124607-24757 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.02s)