=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-windows-amd64.exe start -p pause-073300 --alsologtostderr -v=1 --driver=docker
E0315 21:14:48.849913 8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-919600\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-073300 --alsologtostderr -v=1 --driver=docker: (1m53.1374961s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-073300] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=16056
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on existing profile
* Starting control plane node pause-073300 in cluster pause-073300
* Pulling base image ...
* Updating the running docker "pause-073300" container ...
* Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Enabled addons:
* Verifying Kubernetes components...
- Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default
-- /stdout --
** stderr **
I0315 21:14:40.505339 1332 out.go:296] Setting OutFile to fd 1676 ...
I0315 21:14:40.620890 1332 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 21:14:40.620890 1332 out.go:309] Setting ErrFile to fd 1712...
I0315 21:14:40.620966 1332 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 21:14:40.659650 1332 out.go:303] Setting JSON to false
I0315 21:14:40.665175 1332 start.go:125] hostinfo: {"hostname":"minikube1","uptime":24283,"bootTime":1678890597,"procs":157,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
W0315 21:14:40.665175 1332 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0315 21:14:40.674659 1332 out.go:177] * [pause-073300] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
I0315 21:14:40.679461 1332 notify.go:220] Checking for updates...
I0315 21:14:40.682314 1332 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:14:40.688587 1332 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0315 21:14:40.695397 1332 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
I0315 21:14:40.697893 1332 out.go:177] - MINIKUBE_LOCATION=16056
I0315 21:14:40.700645 1332 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0315 21:14:40.705148 1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:14:40.707350 1332 driver.go:365] Setting default libvirt URI to qemu:///system
I0315 21:14:41.257936 1332 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
I0315 21:14:41.274456 1332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:14:42.518964 1332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2443346s)
I0315 21:14:42.520258 1332 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:104 OomKillDisable:true NGoroutines:81 SystemTime:2023-03-15 21:14:41.5899542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:14:42.524614 1332 out.go:177] * Using the docker driver based on existing profile
I0315 21:14:42.527717 1332 start.go:296] selected driver: docker
I0315 21:14:42.527717 1332 start.go:857] validating driver "docker" against &{Name:pause-073300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:14:42.527717 1332 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0315 21:14:42.561390 1332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:14:43.732580 1332 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.1694367s)
I0315 21:14:43.732798 1332 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:102 OomKillDisable:true NGoroutines:79 SystemTime:2023-03-15 21:14:42.8469155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:14:43.829162 1332 cni.go:84] Creating CNI manager for ""
I0315 21:14:43.829305 1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:14:43.829305 1332 start_flags.go:319] config:
{Name:pause-073300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:14:43.833646 1332 out.go:177] * Starting control plane node pause-073300 in cluster pause-073300
I0315 21:14:43.836862 1332 cache.go:120] Beginning downloading kic base image for docker with docker
I0315 21:14:43.839542 1332 out.go:177] * Pulling base image ...
I0315 21:14:43.842733 1332 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:14:43.842733 1332 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
I0315 21:14:43.842993 1332 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
I0315 21:14:43.842993 1332 cache.go:57] Caching tarball of preloaded images
I0315 21:14:43.843643 1332 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0315 21:14:43.843643 1332 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0315 21:14:43.844172 1332 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\config.json ...
I0315 21:14:44.192908 1332 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon, skipping pull
I0315 21:14:44.192908 1332 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in daemon, skipping load
I0315 21:14:44.192908 1332 cache.go:193] Successfully downloaded all kic artifacts
I0315 21:14:44.192908 1332 start.go:364] acquiring machines lock for pause-073300: {Name:mkb8165e31048686f4d7bcff493eb42dbfcbb659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0315 21:14:44.192908 1332 start.go:368] acquired machines lock for "pause-073300" in 0s
I0315 21:14:44.192908 1332 start.go:96] Skipping create...Using existing machine configuration
I0315 21:14:44.192908 1332 fix.go:55] fixHost starting:
I0315 21:14:44.236201 1332 cli_runner.go:164] Run: docker container inspect pause-073300 --format={{.State.Status}}
I0315 21:14:44.651203 1332 fix.go:103] recreateIfNeeded on pause-073300: state=Running err=<nil>
W0315 21:14:44.651203 1332 fix.go:129] unexpected machine state, will restart: <nil>
I0315 21:14:44.655921 1332 out.go:177] * Updating the running docker "pause-073300" container ...
I0315 21:14:44.659043 1332 machine.go:88] provisioning docker machine ...
I0315 21:14:44.659402 1332 ubuntu.go:169] provisioning hostname "pause-073300"
I0315 21:14:44.679272 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:45.061246 1332 main.go:141] libmachine: Using SSH client type: native
I0315 21:14:45.063308 1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65160 <nil> <nil>}
I0315 21:14:45.063308 1332 main.go:141] libmachine: About to run SSH command:
sudo hostname pause-073300 && echo "pause-073300" | sudo tee /etc/hostname
I0315 21:14:45.531996 1332 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-073300
I0315 21:14:45.550944 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:45.983958 1332 main.go:141] libmachine: Using SSH client type: native
I0315 21:14:45.985679 1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65160 <nil> <nil>}
I0315 21:14:45.985775 1332 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-073300' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-073300/g' /etc/hosts;
	else
		echo '127.0.1.1 pause-073300' | sudo tee -a /etc/hosts;
	fi
fi
I0315 21:14:46.354582 1332 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0315 21:14:46.354764 1332 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0315 21:14:46.354764 1332 ubuntu.go:177] setting up certificates
I0315 21:14:46.354764 1332 provision.go:83] configureAuth start
I0315 21:14:46.373654 1332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-073300
I0315 21:14:46.752009 1332 provision.go:138] copyHostCerts
I0315 21:14:46.753909 1332 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0315 21:14:46.753909 1332 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0315 21:14:46.756980 1332 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
I0315 21:14:46.760942 1332 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0315 21:14:46.760942 1332 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0315 21:14:46.760942 1332 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0315 21:14:46.763297 1332 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0315 21:14:46.763297 1332 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0315 21:14:46.763953 1332 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0315 21:14:46.765251 1332 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-073300 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube pause-073300]
I0315 21:14:47.103137 1332 provision.go:172] copyRemoteCerts
I0315 21:14:47.122955 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0315 21:14:47.142800 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:47.546205 1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
I0315 21:14:47.789045 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
I0315 21:14:47.921672 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0315 21:14:48.109386 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0315 21:14:48.239020 1332 provision.go:86] duration metric: configureAuth took 1.8842594s
I0315 21:14:48.239020 1332 ubuntu.go:193] setting minikube options for container-runtime
I0315 21:14:48.240260 1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:14:48.259529 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:48.656348 1332 main.go:141] libmachine: Using SSH client type: native
I0315 21:14:48.657193 1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65160 <nil> <nil>}
I0315 21:14:48.657193 1332 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0315 21:14:48.987281 1332 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0315 21:14:48.987326 1332 ubuntu.go:71] root file system type: overlay
I0315 21:14:48.987574 1332 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0315 21:14:49.002065 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:49.395383 1332 main.go:141] libmachine: Using SSH client type: native
I0315 21:14:49.397305 1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65160 <nil> <nil>}
I0315 21:14:49.397536 1332 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0315 21:14:49.831693 1332 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0315 21:14:49.850458 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:50.237476 1332 main.go:141] libmachine: Using SSH client type: native
I0315 21:14:50.239498 1332 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65160 <nil> <nil>}
I0315 21:14:50.239498 1332 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0315 21:14:50.598885 1332 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0315 21:14:50.598941 1332 machine.go:91] provisioned docker machine in 5.9396163s
I0315 21:14:50.598941 1332 start.go:300] post-start starting for "pause-073300" (driver="docker")
I0315 21:14:50.599011 1332 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0315 21:14:50.622720 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0315 21:14:50.640775 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:51.043106 1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
I0315 21:14:51.292852 1332 ssh_runner.go:195] Run: cat /etc/os-release
I0315 21:14:51.314432 1332 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0315 21:14:51.314968 1332 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0315 21:14:51.315037 1332 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0315 21:14:51.315037 1332 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0315 21:14:51.315096 1332 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
I0315 21:14:51.315677 1332 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
I0315 21:14:51.316716 1332 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem -> 88122.pem in /etc/ssl/certs
I0315 21:14:51.340291 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0315 21:14:51.396575 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /etc/ssl/certs/88122.pem (1708 bytes)
I0315 21:14:51.535578 1332 start.go:303] post-start completed in 936.5687ms
I0315 21:14:51.567809 1332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0315 21:14:51.587272 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:51.961977 1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
I0315 21:14:52.188251 1332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0315 21:14:52.212356 1332 fix.go:57] fixHost completed within 8.0194642s
I0315 21:14:52.212356 1332 start.go:83] releasing machines lock for "pause-073300", held for 8.0194642s
I0315 21:14:52.225798 1332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-073300
I0315 21:14:52.626752 1332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0315 21:14:52.642874 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:52.648822 1332 ssh_runner.go:195] Run: cat /version.json
I0315 21:14:52.667173 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-073300
I0315 21:14:53.024855 1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
I0315 21:14:53.056225 1332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65160 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-073300\id_rsa Username:docker}
I0315 21:14:53.415628 1332 ssh_runner.go:195] Run: systemctl --version
I0315 21:14:53.456301 1332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0315 21:14:53.496589 1332 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
W0315 21:14:53.535844 1332 start.go:407] unable to name loopback interface in dockerConfigureNetworkPlugin: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
stdout:
stderr:
find: '\\etc\\cni\\net.d': No such file or directory
I0315 21:14:53.554811 1332 ssh_runner.go:195] Run: which cri-dockerd
I0315 21:14:53.594396 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0315 21:14:53.630699 1332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0315 21:14:53.714760 1332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0315 21:14:53.747510 1332 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0315 21:14:53.747558 1332 start.go:485] detecting cgroup driver to use...
I0315 21:14:53.747624 1332 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0315 21:14:53.747780 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 21:14:53.826353 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0315 21:14:53.897205 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0315 21:14:53.940137 1332 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0315 21:14:53.959192 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0315 21:14:54.008505 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 21:14:54.085644 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0315 21:14:54.174515 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 21:14:54.227887 1332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0315 21:14:54.296922 1332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0315 21:14:54.377501 1332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0315 21:14:54.430855 1332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0315 21:14:54.493740 1332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:14:55.075764 1332 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0315 21:15:01.331130 1332 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (6.2553782s)
I0315 21:15:01.331250 1332 start.go:485] detecting cgroup driver to use...
I0315 21:15:01.331319 1332 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0315 21:15:01.349148 1332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0315 21:15:01.444720 1332 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0315 21:15:01.465261 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0315 21:15:01.534382 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 21:15:01.611076 1332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0315 21:15:01.898149 1332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0315 21:15:02.288028 1332 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0315 21:15:02.288028 1332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0315 21:15:02.389281 1332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:15:02.871729 1332 ssh_runner.go:195] Run: sudo systemctl restart docker
I0315 21:15:17.505642 1332 ssh_runner.go:235] Completed: sudo systemctl restart docker: (14.6333883s)
I0315 21:15:17.522785 1332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0315 21:15:18.221285 1332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0315 21:15:18.541325 1332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0315 21:15:18.966238 1332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:15:19.243564 1332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0315 21:15:19.304622 1332 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0315 21:15:19.321224 1332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0315 21:15:19.349813 1332 start.go:553] Will wait 60s for crictl version
I0315 21:15:19.372859 1332 ssh_runner.go:195] Run: which crictl
I0315 21:15:19.411386 1332 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0315 21:15:19.725908 1332 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.1
RuntimeApiVersion: v1alpha2
I0315 21:15:19.746674 1332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0315 21:15:19.845681 1332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0315 21:15:20.161499 1332 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
I0315 21:15:20.172032 1332 cli_runner.go:164] Run: docker exec -t pause-073300 dig +short host.docker.internal
I0315 21:15:20.783625 1332 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0315 21:15:20.810526 1332 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0315 21:15:20.863571 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
I0315 21:15:21.186181 1332 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:15:21.197794 1332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0315 21:15:21.340421 1332 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0315 21:15:21.340421 1332 docker.go:560] Images already preloaded, skipping extraction
I0315 21:15:21.358806 1332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0315 21:15:21.530676 1332 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
registry.k8s.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0315 21:15:21.530902 1332 cache_images.go:84] Images are preloaded, skipping loading
I0315 21:15:21.547617 1332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0315 21:15:21.733812 1332 cni.go:84] Creating CNI manager for ""
I0315 21:15:21.733925 1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:15:21.733972 1332 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0315 21:15:21.734031 1332 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-073300 NodeName:pause-073300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0315 21:15:21.734518 1332 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.103.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-073300"
kubeletExtraArgs:
node-ip: 192.168.103.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0315 21:15:21.734768 1332 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-073300 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0315 21:15:21.756840 1332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0315 21:15:21.947537 1332 binaries.go:44] Found k8s binaries, skipping transfer
I0315 21:15:21.972855 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0315 21:15:22.136836 1332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (445 bytes)
I0315 21:15:22.443146 1332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0315 21:15:22.853268 1332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2091 bytes)
I0315 21:15:23.170849 1332 ssh_runner.go:195] Run: grep 192.168.103.2 control-plane.minikube.internal$ /etc/hosts
I0315 21:15:23.236998 1332 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300 for IP: 192.168.103.2
I0315 21:15:23.237123 1332 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:23.237580 1332 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
I0315 21:15:23.237580 1332 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
I0315 21:15:23.239738 1332 certs.go:311] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\client.key
I0315 21:15:23.240379 1332 certs.go:311] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\apiserver.key.33fce0b9
I0315 21:15:23.240672 1332 certs.go:311] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\proxy-client.key
I0315 21:15:23.243463 1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
W0315 21:15:23.243463 1332 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
I0315 21:15:23.244058 1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I0315 21:15:23.244358 1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I0315 21:15:23.244739 1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0315 21:15:23.244739 1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0315 21:15:23.245646 1332 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
I0315 21:15:23.247378 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0315 21:15:23.378538 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0315 21:15:23.450108 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0315 21:15:23.516385 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0315 21:15:23.582912 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0315 21:15:23.682413 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0315 21:15:23.783573 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0315 21:15:23.868684 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0315 21:15:23.983032 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0315 21:15:24.065791 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
I0315 21:15:24.153574 1332 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
I0315 21:15:24.274466 1332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0315 21:15:24.353599 1332 ssh_runner.go:195] Run: openssl version
I0315 21:15:24.408119 1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
I0315 21:15:24.468740 1332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
I0315 21:15:24.503273 1332 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
I0315 21:15:24.523667 1332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
I0315 21:15:24.585137 1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
I0315 21:15:24.626254 1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0315 21:15:24.682792 1332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:24.703174 1332 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:24.715257 1332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:24.756366 1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0315 21:15:24.813357 1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
I0315 21:15:24.856264 1332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
I0315 21:15:24.883599 1332 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
I0315 21:15:24.896180 1332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
I0315 21:15:24.938726 1332 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
I0315 21:15:24.973171 1332 kubeadm.go:401] StartCluster: {Name:pause-073300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:pause-073300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:15:24.983466 1332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:15:25.051270 1332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0315 21:15:25.086855 1332 kubeadm.go:416] found existing configuration files, will attempt cluster restart
I0315 21:15:25.086855 1332 kubeadm.go:633] restartCluster start
I0315 21:15:25.100452 1332 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0315 21:15:25.132370 1332 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
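
The `sudo ls` above is the restart-vs-init decision: because the kubelet flag file, kubelet config and etcd data directory are already on disk, kubeadm.go attempts a cluster restart instead of a fresh init. A rough sketch of that check (hypothetical helper, assuming direct file access rather than minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Same three paths the log lists; if any is missing, a full init is needed.
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        restart := true
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                restart = false
                fmt.Println("missing:", p)
            }
        }
        if restart {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no existing cluster state, full init required")
        }
    }
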
I0315 21:15:25.140295 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
I0315 21:15:25.398909 1332 kubeconfig.go:92] found "pause-073300" server: "https://127.0.0.1:65165"
I0315 21:15:25.401452 1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
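
The kapi.go line above shows the client-go rest.Config minikube builds from the profile's client certificate and the host port it just read off the container's 8443/tcp mapping (65165 in this run). A minimal sketch of constructing an equivalent client with client-go (host, port and certificate paths are copied from this run, not generic):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://127.0.0.1:65165",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: `C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\client.crt`,
                KeyFile:  `C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-073300\client.key`,
                CAFile:   `C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt`,
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }
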
I0315 21:15:25.410936 1332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0315 21:15:25.444192 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:25.456631 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:25.492162 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:25.993789 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:26.001803 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:26.031907 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:26.498885 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:26.520413 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:26.747568 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:26.998577 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:27.005491 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:27.038573 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:27.494449 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:27.510680 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:27.648998 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:28.001209 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:28.016866 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:28.252092 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:28.497926 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:28.519187 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:28.938518 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:29.005873 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:29.022195 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:29.437505 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:29.498878 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:29.509169 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:15:29.790027 1332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6279/cgroup
I0315 21:15:30.138061 1332 api_server.go:181] apiserver freezer: "20:freezer:/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
I0315 21:15:30.167896 1332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16/freezer.state
I0315 21:15:30.342651 1332 api_server.go:203] freezer state: "THAWED"
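
The two commands above resolve the apiserver pid to its freezer cgroup and then read freezer.state to confirm the container is running rather than paused. A small sketch of the same cgroup v1 check (the pid and cgroup path are the ones from this run):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // freezerState reads /sys/fs/cgroup/freezer/<cgroupPath>/freezer.state, which
    // reports FROZEN for paused containers and THAWED for running ones (cgroup v1).
    func freezerState(cgroupPath string) (string, error) {
        data, err := os.ReadFile(filepath.Join("/sys/fs/cgroup/freezer", cgroupPath, "freezer.state"))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        // This path is the "freezer:" entry from /proc/6279/cgroup in the log above.
        path := "/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
        state, err := freezerState(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("freezer state:", state) // expected "THAWED" here
    }
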
I0315 21:15:30.342651 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:30.356716 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:15:30.356862 1332 retry.go:31] will retry after 297.564807ms: state is "Stopped"
I0315 21:15:30.665183 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:30.674974 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:15:30.675152 1332 retry.go:31] will retry after 319.696256ms: state is "Stopped"
I0315 21:15:31.004595 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:36.012455 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:15:36.012558 1332 retry.go:31] will retry after 307.806183ms: state is "Stopped"
I0315 21:15:36.332781 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:41.339223 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:15:41.339409 1332 retry.go:31] will retry after 386.719795ms: state is "Stopped"
I0315 21:15:41.739620 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:44.046130 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:15:44.046265 1332 retry.go:31] will retry after 731.95405ms: https://127.0.0.1:65165/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:15:44.784826 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:44.930024 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:15:44.930412 1332 kubeadm.go:608] needs reconfigure: apiserver error: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
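
The probes above show the usual recovery sequence: EOF while the apiserver socket is not yet accepting connections, then 403 for the anonymous /healthz request, then 500 while the rbac and scheduling post-start hooks are still completing. minikube keeps retrying until /healthz returns 200, and a non-200 answer at this point is what marks the cluster as needing reconfiguration. A simplified sketch of such a poll loop (status-code only; the real api_server.go also inspects the response body, and the URL and CA path are the ones from this run):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    // healthz performs one GET against the apiserver health endpoint, trusting
    // only the cluster CA, and returns the HTTP status code.
    func healthz(url, caFile string) (int, error) {
        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            return 0, err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return 0, err
        }
        defer resp.Body.Close()
        return resp.StatusCode, nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            code, err := healthz("https://127.0.0.1:65165/healthz",
                `C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt`)
            if err == nil && code == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Printf("not ready yet (code=%d err=%v), retrying\n", code, err)
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /healthz")
    }
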
I0315 21:15:44.930412 1332 kubeadm.go:1120] stopping kube-system containers ...
I0315 21:15:44.948612 1332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:15:45.444192 1332 docker.go:456] Stopping containers: [e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0]
I0315 21:15:45.468741 1332 ssh_runner.go:195] Run: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0
I0315 21:15:55.263337 1332 ssh_runner.go:235] Completed: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0: (9.7945662s)
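
Reconfiguration starts by stopping every container the kubelet created for kube-system (names of the form k8s_<container>_<pod>_<namespace>_...); the kubelet itself is stopped on the next line. A local sketch of that cleanup (runs docker directly rather than over SSH as minikube does):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same filter the log uses to find kubelet-managed kube-system containers.
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println("nothing to stop")
            return
        }
        fmt.Println("Stopping containers:", ids)
        // docker stop accepts multiple container IDs in one invocation.
        if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
            panic(err)
        }
    }
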
I0315 21:15:55.280007 1332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0315 21:15:55.667015 1332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 21:15:55.884955 1332 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Mar 15 21:13 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Mar 15 21:13 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Mar 15 21:14 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Mar 15 21:13 /etc/kubernetes/scheduler.conf
I0315 21:15:55.906317 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0315 21:15:55.970490 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0315 21:15:56.077831 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0315 21:15:56.164837 1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0315 21:15:56.189369 1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0315 21:15:56.278633 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0315 21:15:56.350783 1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0315 21:15:56.368651 1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
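
admin.conf and kubelet.conf above contain the expected https://control-plane.minikube.internal:8443 server address, while controller-manager.conf and scheduler.conf do not, so the latter two are deleted and will be regenerated by the kubeconfig phase below. A sketch of that pruning step (assuming local file access):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // a missing file will simply be regenerated
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale config:", f)
                if err := os.Remove(f); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }
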
I0315 21:15:56.472488 1332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 21:15:56.554151 1332 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0315 21:15:56.554288 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:56.838520 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:58.821631 1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9831146s)
I0315 21:15:58.821631 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.241679 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.531884 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
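
Instead of a full `kubeadm init`, the restart path re-runs individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane and etcd here, with the addon phase following later once the apiserver is healthy. A sketch of driving that same phase sequence (binary and config paths are the ones from this run):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.26.2/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running:", kubeadm, args)
            if err := cmd.Run(); err != nil {
                fmt.Fprintln(os.Stderr, "phase failed:", err)
                os.Exit(1)
            }
        }
    }
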
I0315 21:15:59.837145 1332 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:15:59.862394 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:00.562737 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:01.047261 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:01.561853 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:02.057572 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:02.554491 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.060987 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.560744 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.058096 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.574094 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:05.054883 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:05.558867 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.064030 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.559451 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.836193 1332 api_server.go:71] duration metric: took 6.999061s to wait for apiserver process to appear ...
I0315 21:16:06.836348 1332 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:06.836472 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:06.844702 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:16:07.349930 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:07.360047 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:16:07.852770 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:12.856341 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:16:13.355164 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:13.531052 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0315 21:16:13.531052 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:16:13.856894 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:13.948093 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:13.948207 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:14.353756 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:14.444021 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:14.444582 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:14.850032 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:14.881729 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:14.881822 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:15.359619 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:15.458273 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:15.458359 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:15.846895 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:15.875897 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
ok
I0315 21:16:15.909269 1332 api_server.go:140] control plane version: v1.26.2
I0315 21:16:15.909297 1332 api_server.go:130] duration metric: took 9.0729659s to wait for apiserver health ...
I0315 21:16:15.909353 1332 cni.go:84] Creating CNI manager for ""
I0315 21:16:15.909353 1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:15.912744 1332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0315 21:16:15.925415 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0315 21:16:15.965847 1332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
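
The 457-byte file copied above is the bridge CNI configuration behind the "Configuring bridge CNI" message. The exact contents minikube generates are not shown in the log; the sketch below writes an illustrative bridge + host-local conflist of the same general shape, purely as an example:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Illustrative only: minikube's generated 1-k8s.conflist may differ in
        // fields and values (e.g. the pod subnet and plugin chain).
        conflist := `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
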
I0315 21:16:16.079955 1332 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:16.096342 1332 system_pods.go:59] 6 kube-system pods found
I0315 21:16:16.096342 1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0315 21:16:16.096342 1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0315 21:16:16.096342 1332 system_pods.go:74] duration metric: took 16.2168ms to wait for pod list to return data ...
I0315 21:16:16.096342 1332 node_conditions.go:102] verifying NodePressure condition ...
I0315 21:16:16.105140 1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0315 21:16:16.105226 1332 node_conditions.go:123] node cpu capacity is 16
I0315 21:16:16.105269 1332 node_conditions.go:105] duration metric: took 8.8846ms to run NodePressure ...
I0315 21:16:16.105316 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:16:17.333440 1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2280887s)
I0315 21:16:17.333615 1332 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0315 21:16:17.354686 1332 kubeadm.go:784] kubelet initialised
I0315 21:16:17.354754 1332 kubeadm.go:785] duration metric: took 21.1391ms waiting for restarted kubelet to initialise ...
I0315 21:16:17.354822 1332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:17.435085 1332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:19.521467 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:22.006711 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:24.016700 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:26.048179 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:28.050667 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:29.001447 1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.001447 1332 pod_ready.go:81] duration metric: took 11.5663842s waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.001447 1332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.028330 1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.028330 1332 pod_ready.go:81] duration metric: took 26.8832ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.028330 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.057628 1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.057628 1332 pod_ready.go:81] duration metric: took 29.2978ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.057628 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.092004 1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.092004 1332 pod_ready.go:81] duration metric: took 34.3758ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.092004 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.131434 1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.131486 1332 pod_ready.go:81] duration metric: took 39.482ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.131486 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.402295 1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.402345 1332 pod_ready.go:81] duration metric: took 270.8098ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.402345 1332 pod_ready.go:38] duration metric: took 12.0475003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
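
The pod_ready loop above polls each system-critical pod until its Ready condition is True (coredns took roughly 11.5s here; the static control-plane pods were already Ready). A condensed client-go sketch of the same wait, using a placeholder kubeconfig path and the deprecated-but-still-available wait.PollImmediate helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady blocks until the named pod reports the Ready condition as True.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not fatal: keep polling, as the log above does
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for _, pod := range []string{"coredns-787d4945fb-2q246", "etcd-pause-073300", "kube-apiserver-pause-073300"} {
            if err := waitPodReady(cs, "kube-system", pod, 4*time.Minute); err != nil {
                fmt.Println("not ready:", pod, err)
            }
        }
    }
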
I0315 21:16:29.402386 1332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:16:29.426130 1332 ops.go:34] apiserver oom_adj: -16
I0315 21:16:29.426187 1332 kubeadm.go:637] restartCluster took 1m4.338895s
I0315 21:16:29.426266 1332 kubeadm.go:403] StartCluster complete in 1m4.4532784s
I0315 21:16:29.426351 1332 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:29.426601 1332 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:16:29.429857 1332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:29.432982 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 21:16:29.432982 1332 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0315 21:16:29.433680 1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:16:29.438415 1332 out.go:177] * Enabled addons:
I0315 21:16:29.443738 1332 addons.go:499] enable addons completed in 10.8462ms: enabled=[]
I0315 21:16:29.452842 1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0315 21:16:29.467764 1332 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-073300" context rescaled to 1 replicas
I0315 21:16:29.467764 1332 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:16:29.470858 1332 out.go:177] * Verifying Kubernetes components...
I0315 21:16:29.484573 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:29.761590 1332 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0315 21:16:29.775423 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
I0315 21:16:30.117208 1332 node_ready.go:35] waiting up to 6m0s for node "pause-073300" to be "Ready" ...
I0315 21:16:30.134817 1332 node_ready.go:49] node "pause-073300" has status "Ready":"True"
I0315 21:16:30.134886 1332 node_ready.go:38] duration metric: took 17.4789ms waiting for node "pause-073300" to be "Ready" ...
I0315 21:16:30.135066 1332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:30.162562 1332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.219441 1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:30.219583 1332 pod_ready.go:81] duration metric: took 57.0207ms waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.219583 1332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.608418 1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:30.608458 1332 pod_ready.go:81] duration metric: took 388.876ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.608458 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.017074 1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.017074 1332 pod_ready.go:81] duration metric: took 408.6175ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.017074 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.395349 1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.395349 1332 pod_ready.go:81] duration metric: took 378.275ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.395349 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.792495 1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.792495 1332 pod_ready.go:81] duration metric: took 397.1476ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.792495 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:32.219569 1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:32.220120 1332 pod_ready.go:81] duration metric: took 427.0739ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:32.220120 1332 pod_ready.go:38] duration metric: took 2.0850147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:32.220120 1332 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:16:32.232971 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:32.332638 1332 api_server.go:71] duration metric: took 2.8648801s to wait for apiserver process to appear ...
I0315 21:16:32.332638 1332 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:32.332638 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:32.362918 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
ok
I0315 21:16:32.430820 1332 api_server.go:140] control plane version: v1.26.2
I0315 21:16:32.430820 1332 api_server.go:130] duration metric: took 98.1819ms to wait for apiserver health ...
I0315 21:16:32.430820 1332 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:32.455349 1332 system_pods.go:59] 6 kube-system pods found
I0315 21:16:32.455486 1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
I0315 21:16:32.455486 1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
I0315 21:16:32.455544 1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
I0315 21:16:32.455642 1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:32.455642 1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
I0315 21:16:32.455685 1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
I0315 21:16:32.455785 1332 system_pods.go:74] duration metric: took 24.9239ms to wait for pod list to return data ...
I0315 21:16:32.455785 1332 default_sa.go:34] waiting for default service account to be created ...
I0315 21:16:32.637154 1332 default_sa.go:45] found service account: "default"
I0315 21:16:32.637301 1332 default_sa.go:55] duration metric: took 181.4813ms for default service account to be created ...
I0315 21:16:32.637301 1332 system_pods.go:116] waiting for k8s-apps to be running ...
I0315 21:16:32.844031 1332 system_pods.go:86] 6 kube-system pods found
I0315 21:16:32.844031 1332 system_pods.go:89] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
I0315 21:16:32.844031 1332 system_pods.go:126] duration metric: took 206.7296ms to wait for k8s-apps to be running ...
I0315 21:16:32.844031 1332 system_svc.go:44] waiting for kubelet service to be running ....
I0315 21:16:32.858698 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:32.902525 1332 system_svc.go:56] duration metric: took 56.9493ms WaitForService to wait for kubelet.
I0315 21:16:32.902598 1332 kubeadm.go:578] duration metric: took 3.4348415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0315 21:16:32.902669 1332 node_conditions.go:102] verifying NodePressure condition ...
I0315 21:16:33.016156 1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0315 21:16:33.016241 1332 node_conditions.go:123] node cpu capacity is 16
I0315 21:16:33.016278 1332 node_conditions.go:105] duration metric: took 113.5716ms to run NodePressure ...
I0315 21:16:33.016316 1332 start.go:228] waiting for startup goroutines ...
I0315 21:16:33.016316 1332 start.go:233] waiting for cluster config update ...
I0315 21:16:33.016351 1332 start.go:242] writing updated cluster config ...
I0315 21:16:33.039378 1332 ssh_runner.go:195] Run: rm -f paused
I0315 21:16:33.289071 1332 start.go:555] kubectl: 1.18.2, cluster: 1.26.2 (minor skew: 8)
I0315 21:16:33.292949 1332 out.go:177]
W0315 21:16:33.295479 1332 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
I0315 21:16:33.297706 1332 out.go:177] - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
I0315 21:16:33.301501 1332 out.go:177] * Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-073300
helpers_test.go:235: (dbg) docker inspect pause-073300:
-- stdout --
[
{
"Id": "8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af",
"Created": "2023-03-15T21:12:57.6447279Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 235036,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-03-15T21:13:02.5343301Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c2228ee73b919fe6986a8848f936a81a268f0e56f65fc402964f596a1336d16b",
"ResolvConfPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hostname",
"HostsPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hosts",
"LogPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af-json.log",
"Name": "/pause-073300",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-073300:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-073300",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 2147483648,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b-init/diff:/var/lib/docker/overlay2/dd4a105805e89f3781ba34ad53d0a86096f0b864f9eade98210c90b3db11e614/diff:/var/lib/docker/overlay2/85f05c8966ab20f24eea0cadf9b702a2755c1a700aee4fcacd3754b8fa7f8a91/diff:/var/lib/docker/overlay2/b2c60f67ad52427067a519010db687573f6b5b01526e9e9493d88bbb3dcaf069/diff:/var/lib/docker/overlay2/ca870ef465e163b19b7e0ef24b89c201cc7cfe12753a6ca6a515827067e4fc98/diff:/var/lib/docker/overlay2/f55801eccf5ae4ff6206eaaaca361e1d9bfadc5759172bb8072e835b0002419b/diff:/var/lib/docker/overlay2/3da247e6db7b0c502d6067a49cfb704f596cd5fe9a3a874f6888ae9cc2373233/diff:/var/lib/docker/overlay2/f0dcb6d169a751860b7c097c666afe3d8fba3aac20d90e95b7f85913b7d1fda7/diff:/var/lib/docker/overlay2/a0c906b3378b625d84a7a2d043cc982545599c488b72767e2b4822211ddee871/diff:/var/lib/docker/overlay2/1380f7e23737bb69bab3e1c3b37fff4a603a1096ba1e984f2808fdb9fc5664b7/diff:/var/lib/docker/overlay2/f09380
dffb1afe5e97599b999b6d05a1d0b97490fc3afb897018955e3589ddf0/diff:/var/lib/docker/overlay2/12504a4aab3b43a1624555c565265eb2a252f3cc64b5942527ead795f1b46742/diff:/var/lib/docker/overlay2/2f17a40545e098dc56e6667d78dfde761f9ae57ff4c2dcab77a6135abc29f050/diff:/var/lib/docker/overlay2/378841db26151d8a66f60032a9366d4572aeb0fd0db1c1af9429abf5d7b6ab82/diff:/var/lib/docker/overlay2/14ee7241acf63b7e56e700bccdbcc29bd6530ebd357799238641498ccb978bc1/diff:/var/lib/docker/overlay2/0e384b8276413ac21818038eacaf3da54a8ac43c6ccef737b2c4e70e568fe287/diff:/var/lib/docker/overlay2/66beff05ea52aebfaea737c44ff3da16f742e7e2577ccea2c1fe954085a1e7f4/diff:/var/lib/docker/overlay2/fe7b0a2c7d3f1889e322a156881a5066e5e784dc1888fbf172b4beada499c14a/diff:/var/lib/docker/overlay2/bf3118300571672a5d3b839bbbbaa42516c05f16305f5b944d88d38687857207/diff:/var/lib/docker/overlay2/d1326cf983418efce550556b370f71d9b4d9e6671a9267ea6433967dcafff129/diff:/var/lib/docker/overlay2/cc4d1369146bbaac53f23e5cb8e072c195a8c109396c1f305d9a90dbcb491d62/diff:/var/lib/d
ocker/overlay2/20a6a00f4e15b51632a8a26911faf3243318c3e7bd9266fe9c926ca6070526a8/diff:/var/lib/docker/overlay2/6a6bfa0be9e2c1a0aa9fa555897c7f62f7c23b782a2117560731f10b833692a0/diff:/var/lib/docker/overlay2/0d9ed53179f81c8d2e276195863f6ac1ba99be69a7217caa97c19fe1121b0d38/diff:/var/lib/docker/overlay2/f9e70916967de3d00f48ca66d15ec3af34bd3980334b7ecb8950be0a5aee2e5e/diff:/var/lib/docker/overlay2/8a3ebe53f0b355704a58efda53f1dcf8ae0099f0a7947c748e7c447044baed05/diff:/var/lib/docker/overlay2/f6841f5c7deb52ba587f1365fd0bc48fe4334bd9678f4846740d9e4f3df386c4/diff:/var/lib/docker/overlay2/7729eb6c4bb6c79eae923e1946b180dcdb33aa85c259a8a21b46994e681a329f/diff:/var/lib/docker/overlay2/86ccbe980393e3c2dea4faf1f5b45fa86ac8f47190cf4fb3ebb23d5fd6687d44/diff:/var/lib/docker/overlay2/48b28921897a52ef79e37091b3d3df88fa4e01604e3a63d7e3dbbd72e551797c/diff:/var/lib/docker/overlay2/b9f9c70e4945260452936930e508cb1e7d619927da4230c7b792e5908a93ec46/diff:/var/lib/docker/overlay2/39f84637efc722da57b6de997d757e8709af3d48f8cba3da8848d3674aa
7ba4d/diff:/var/lib/docker/overlay2/9d81ba80e5128eb395bcffc7b56889c3d18172c222e637671a4b3c12c0a72afd/diff:/var/lib/docker/overlay2/03583facbdd50e79e467eb534dfcbe3d5e47aef4b25195138b3c0134ebd7f07e/diff:/var/lib/docker/overlay2/38e991cef8fb39c883da64e57775232fd1df5a4c67f32565e747b7363f336632/diff:/var/lib/docker/overlay2/0e0ebf6f489a93585842ec4fef7d044da67fd8a9504f91fe03cc03c6928134b8/diff:/var/lib/docker/overlay2/dedec87bbba9e6a1a68a159c167cac4c10a25918fa3d00630d6570db2ca290eb/diff:/var/lib/docker/overlay2/dc09130400d9f44a28862a6484b44433985893e9a8f49df62c38c0bd6b5e4e2c/diff:/var/lib/docker/overlay2/f00d229f6d9f2960571b2e1c365f30bd680b686c0d4569b5190c072a626c6811/diff:/var/lib/docker/overlay2/1a9993f098965bbd60b6e43b5998e4fcae02f81d65cc863bd8f6e29f7e2b8426/diff:/var/lib/docker/overlay2/500f950cf1835311103c129d3c1487e8e6b917ad928788ee14527cd8342c544f/diff:/var/lib/docker/overlay2/018feb310d5aa53cd6175c82f8ca56d22b3c1ad26ae5cfda5f6e3b56ca3919e6/diff:/var/lib/docker/overlay2/f84198610374e88e1ba6917bf70c8d9cea6ede
68b5fb4852c7eebcb536a12a83/diff",
"MergedDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/merged",
"UpperDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/diff",
"WorkDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "pause-073300",
"Source": "/var/lib/docker/volumes/pause-073300/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "pause-073300",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-073300",
"name.minikube.sigs.k8s.io": "pause-073300",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "20.04",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c465f6b5b8ea2cbabcd582f953a2ee6755ba6c0b6db6fbc3b931a291aafae975",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65160"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65161"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65163"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65164"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65165"
}
]
},
"SandboxKey": "/var/run/docker/netns/c465f6b5b8ea",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-073300": {
"IPAMConfig": {
"IPv4Address": "192.168.103.2"
},
"Links": null,
"Aliases": [
"8be68eee5af2",
"pause-073300"
],
"NetworkID": "e97288cdb8ed8d3c843be70e49117f727e8c88772310c60f193237b2f3d2167f",
"EndpointID": "7dff20190b061cfe2a0b46f43c2f9a085fd94900413646e6b074cab27b5ac50e",
"Gateway": "192.168.103.1",
"IPAddress": "192.168.103.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:67:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
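For reference: the inspect output above shows the container still Running, with the apiserver's 8443/tcp published on 127.0.0.1:65165, which is the port the health checks later in this log target. A minimal Go sketch (not minikube's own post-mortem code; the container name is taken from this log) that reads just those two fields:

// Minimal sketch, assuming docker is on PATH and the container is named after
// the profile as in this log: print the container state and the host port
// published for the apiserver's 8443/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "pause-073300"
	state, err := inspect(name, "{{.State.Status}}")
	if err != nil {
		panic(err)
	}
	port, err := inspect(name, `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("container %s is %s, apiserver published on 127.0.0.1:%s\n", name, state, port)
}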
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300: (2.0626046s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p pause-073300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-073300 logs -n 25: (3.6909868s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | cri-dockerd --version | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-899600 sudo find | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-899600 sudo crio | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | config | | | | | |
| delete | -p cilium-899600 | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| start | -p force-systemd-env-387800 | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:15 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=docker | | | | | |
| ssh | cert-options-298900 ssh | cert-options-298900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-298900 -- sudo | cert-options-298900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-298900 | cert-options-298900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| delete | -p cert-expiration-023900 | cert-expiration-023900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| start | -p old-k8s-version-103800 | old-k8s-version-103800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| start | -p no-preload-470000 | no-preload-470000 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --kubernetes-version=v1.26.2 | | | | | |
| start | -p pause-073300 | pause-073300 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:14 UTC | 15 Mar 23 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=docker | | | | | |
| ssh | force-systemd-env-387800 | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-387800 | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
| start | -p embed-certs-348900 | embed-certs-348900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --kubernetes-version=v1.26.2 | | | | | |
|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/15 21:15:28
Running on machine: minikube1
Binary: Built with gc go1.20.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0315 21:15:28.142992 11164 out.go:296] Setting OutFile to fd 1840 ...
I0315 21:15:28.223401 11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 21:15:28.223401 11164 out.go:309] Setting ErrFile to fd 1952...
I0315 21:15:28.223401 11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 21:15:28.262334 11164 out.go:303] Setting JSON to false
I0315 21:15:28.267297 11164 start.go:125] hostinfo: {"hostname":"minikube1","uptime":24330,"bootTime":1678890597,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
W0315 21:15:28.269446 11164 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0315 21:15:28.271110 11164 out.go:177] * [embed-certs-348900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
I0315 21:15:28.276466 11164 notify.go:220] Checking for updates...
I0315 21:15:28.279987 11164 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:15:28.284307 11164 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0315 21:15:28.287394 11164 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
I0315 21:15:28.289437 11164 out.go:177] - MINIKUBE_LOCATION=16056
I0315 21:15:28.293408 11164 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0315 21:15:27.107652 3304 kubeadm.go:322] [apiclient] All control plane components are healthy after 22.564526 seconds
I0315 21:15:27.107905 3304 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0315 21:15:27.174450 3304 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I0315 21:15:27.850318 3304 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0315 21:15:27.850318 3304 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-103800 as control-plane by adding the label "node-role.kubernetes.io/master=''"
I0315 21:15:28.451439 3304 kubeadm.go:322] [bootstrap-token] Using token: 1vsykl.s1ca43i7aq3le3xp
I0315 21:15:28.454827 3304 out.go:204] - Configuring RBAC rules ...
I0315 21:15:28.455102 3304 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0315 21:15:28.540595 3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0315 21:15:28.708614 3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0315 21:15:28.750604 3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0315 21:15:28.768374 3304 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0315 21:15:28.296206 11164 config.go:182] Loaded profile config "no-preload-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:15:28.296901 11164 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0315 21:15:28.296901 11164 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:15:28.297434 11164 driver.go:365] Setting default libvirt URI to qemu:///system
I0315 21:15:28.716358 11164 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
I0315 21:15:28.733128 11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:15:30.097371 11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3641858s)
I0315 21:15:30.098315 11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:28.9739466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:15:30.101949 11164 out.go:177] * Using the docker driver based on user configuration
I0315 21:15:25.993789 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:26.001803 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:26.031907 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:26.498885 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:26.520413 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:26.747568 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:26.998577 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:27.005491 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:27.038573 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:27.494449 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:27.510680 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:27.648998 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:28.001209 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:28.016866 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:28.252092 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:28.497926 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:28.519187 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:28.938518 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:29.005873 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:29.022195 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:29.437505 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:29.498878 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:29.509169 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:15:29.790027 1332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6279/cgroup
I0315 21:15:30.138061 1332 api_server.go:181] apiserver freezer: "20:freezer:/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
I0315 21:15:30.167896 1332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16/freezer.state
I0315 21:15:30.342651 1332 api_server.go:203] freezer state: "THAWED"
I0315 21:15:30.342651 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:30.356716 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:15:30.356862 1332 retry.go:31] will retry after 297.564807ms: state is "Stopped"
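The repeated lines above are minikube waiting for the apiserver after the restart: it locates the apiserver process, confirms its freezer cgroup is THAWED, then polls /healthz on the published host port and retries after short waits. A minimal Go sketch of that poll, using the URL from this log; the timeout, retry count, and TLS verification skip are illustrative assumptions, not minikube's actual settings:

// Minimal sketch of the healthz poll seen above. The URL comes from this log's
// port mapping (8443/tcp published on 127.0.0.1:65165); everything else is an
// assumption, and minikube's real wait loop differs in detail.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://127.0.0.1:65165/healthz"
	for i := 0; i < 20; i++ {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(300 * time.Millisecond) // comparable to the ~300ms retries above
	}
	fmt.Println("apiserver did not become healthy")
}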
I0315 21:15:28.433538 4576 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (19.494237s)
I0315 21:15:28.433538 4576 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.6-0 from cache
I0315 21:15:28.433538 4576 cache_images.go:123] Successfully loaded all cached images
I0315 21:15:28.434115 4576 cache_images.go:92] LoadImages completed in 1m0.6105675s
I0315 21:15:28.453600 4576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0315 21:15:28.577481 4576 cni.go:84] Creating CNI manager for ""
I0315 21:15:28.577553 4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:15:28.577553 4576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0315 21:15:28.577617 4576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-470000 NodeName:no-preload-470000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0315 21:15:28.577869 4576 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/cri-dockerd.sock
  name: "no-preload-470000"
  kubeletExtraArgs:
    node-ip: 192.168.85.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
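The block above is the kubeadm config rendered for no-preload-470000: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal Go sketch, assuming gopkg.in/yaml.v3 and a hypothetical local copy of the rendered file, that splits such a config into its documents and lists their kinds:

// Minimal sketch (gopkg.in/yaml.v3 and the kubeadm.yaml path are assumptions):
// decode the multi-document YAML and print each document's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc.APIVersion, doc.Kind) // e.g. "kubeadm.k8s.io/v1beta3 InitConfiguration"
	}
}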
I0315 21:15:28.577869 4576 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-470000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0315 21:15:28.591514 4576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0315 21:15:28.640201 4576 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.26.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.26.2': No such file or directory
Initiating transfer...
I0315 21:15:28.658006 4576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.26.2
I0315 21:15:28.718165 4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl
I0315 21:15:28.718374 4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm
I0315 21:15:28.718374 4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet
I0315 21:15:30.110051 4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubeadm
I0315 21:15:30.131361 4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubeadm': No such file or directory
I0315 21:15:30.131361 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm --> /var/lib/minikube/binaries/v1.26.2/kubeadm (46768128 bytes)
I0315 21:15:30.168761 4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubectl
I0315 21:15:30.671927 4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubectl': No such file or directory
I0315 21:15:30.672203 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl --> /var/lib/minikube/binaries/v1.26.2/kubectl (48029696 bytes)
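The sequence above is minikube provisioning the Kubernetes binaries: the sudo ls finds no v1.26.2 binaries on the node, the kubeadm/kubectl/kubelet downloads land in the local cache, each target path is checked with stat, and only missing files are copied across. A minimal Go sketch of that check-then-copy pattern on local paths (the paths are made up; the real stat and copy run over SSH against the node):

// Minimal sketch of the check-then-copy step above, done on local paths for
// brevity. The cache and destination directories are illustrative, not
// minikube's real layout or transport.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func ensureBinary(cacheDir, destDir, name string) error {
	dest := filepath.Join(destDir, name)
	if _, err := os.Stat(dest); err == nil {
		return nil // already present, skip the transfer
	}
	if err := os.MkdirAll(destDir, 0o755); err != nil {
		return err
	}
	src, err := os.Open(filepath.Join(cacheDir, name))
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := ensureBinary("cache/linux/amd64/v1.26.2", "binaries/v1.26.2", b); err != nil {
			fmt.Println("transfer failed for", b+":", err)
		}
	}
}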
I0315 21:15:30.105668 11164 start.go:296] selected driver: docker
I0315 21:15:30.105668 11164 start.go:857] validating driver "docker" against <nil>
I0315 21:15:30.105668 11164 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0315 21:15:30.254283 11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:15:31.493680 11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2393516s)
I0315 21:15:31.494207 11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:30.5680929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:15:31.494635 11164 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0315 21:15:31.496393 11164 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0315 21:15:31.499064 11164 out.go:177] * Using Docker Desktop driver with root privileges
I0315 21:15:31.501160 11164 cni.go:84] Creating CNI manager for ""
I0315 21:15:31.501160 11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:15:31.501160 11164 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0315 21:15:31.501160 11164 start_flags.go:319] config:
{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:15:31.504086 11164 out.go:177] * Starting control plane node embed-certs-348900 in cluster embed-certs-348900
I0315 21:15:31.506766 11164 cache.go:120] Beginning downloading kic base image for docker with docker
I0315 21:15:31.510102 11164 out.go:177] * Pulling base image ...
I0315 21:15:31.512871 11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:15:31.512871 11164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
I0315 21:15:31.513118 11164 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
I0315 21:15:31.513179 11164 cache.go:57] Caching tarball of preloaded images
I0315 21:15:31.513395 11164 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0315 21:15:31.513395 11164 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0315 21:15:31.514113 11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
I0315 21:15:31.514113 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json: {Name:mk3060d08febbde2429fe9a2baf8bbeb029a2640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:31.875381 11164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon, skipping pull
I0315 21:15:31.875429 11164 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in daemon, skipping load
I0315 21:15:31.875429 11164 cache.go:193] Successfully downloaded all kic artifacts
I0315 21:15:31.875429 11164 start.go:364] acquiring machines lock for embed-certs-348900: {Name:mk2351699223ac71a23a94063928109d9d9f576a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0315 21:15:31.875429 11164 start.go:368] acquired machines lock for "embed-certs-348900" in 0s
I0315 21:15:31.876003 11164 start.go:93] Provisioning new machine with config: &{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:15:31.876319 11164 start.go:125] createHost starting for "" (driver="docker")
I0315 21:15:31.880060 11164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0315 21:15:31.880999 11164 start.go:159] libmachine.API.Create for "embed-certs-348900" (driver="docker")
I0315 21:15:31.881063 11164 client.go:168] LocalClient.Create starting
I0315 21:15:31.881279 11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
I0315 21:15:31.881815 11164 main.go:141] libmachine: Decoding PEM data...
I0315 21:15:31.881932 11164 main.go:141] libmachine: Parsing certificate...
I0315 21:15:31.881975 11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
I0315 21:15:31.881975 11164 main.go:141] libmachine: Decoding PEM data...
I0315 21:15:31.881975 11164 main.go:141] libmachine: Parsing certificate...
I0315 21:15:31.896077 11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0315 21:15:32.230585 11164 cli_runner.go:211] docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0315 21:15:32.246557 11164 network_create.go:281] running [docker network inspect embed-certs-348900] to gather additional debugging logs...
I0315 21:15:32.246658 11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900
W0315 21:15:32.585407 11164 cli_runner.go:211] docker network inspect embed-certs-348900 returned with exit code 1
I0315 21:15:32.585485 11164 network_create.go:284] error running [docker network inspect embed-certs-348900]: docker network inspect embed-certs-348900: exit status 1
stdout:
[]
stderr:
Error: No such network: embed-certs-348900
I0315 21:15:32.585531 11164 network_create.go:286] output of [docker network inspect embed-certs-348900]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: embed-certs-348900
** /stderr **
I0315 21:15:32.596667 11164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0315 21:15:32.951201 11164 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0315 21:15:32.983071 11164 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e77440}
I0315 21:15:32.983153 11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0315 21:15:32.994000 11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
I0315 21:15:29.902489 3304 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0315 21:15:30.410425 3304 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0315 21:15:30.439154 3304 kubeadm.go:322]
I0315 21:15:30.439418 3304 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0315 21:15:30.439418 3304 kubeadm.go:322]
I0315 21:15:30.440591 3304 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0315 21:15:30.440591 3304 kubeadm.go:322]
I0315 21:15:30.440591 3304 kubeadm.go:322] mkdir -p $HOME/.kube
I0315 21:15:30.440591 3304 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0315 21:15:30.440591 3304 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0315 21:15:30.441146 3304 kubeadm.go:322]
I0315 21:15:30.441302 3304 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0315 21:15:30.441302 3304 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0315 21:15:30.441302 3304 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0315 21:15:30.441302 3304 kubeadm.go:322]
I0315 21:15:30.442077 3304 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0315 21:15:30.442368 3304 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0315 21:15:30.442368 3304 kubeadm.go:322]
I0315 21:15:30.442768 3304 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
I0315 21:15:30.442976 3304 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
I0315 21:15:30.442976 3304 kubeadm.go:322] --control-plane
I0315 21:15:30.442976 3304 kubeadm.go:322]
I0315 21:15:30.442976 3304 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0315 21:15:30.442976 3304 kubeadm.go:322]
I0315 21:15:30.442976 3304 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
I0315 21:15:30.442976 3304 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716
I0315 21:15:30.449019 3304 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0315 21:15:30.449255 3304 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0315 21:15:30.449632 3304 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
I0315 21:15:30.449944 3304 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0315 21:15:30.449944 3304 cni.go:84] Creating CNI manager for ""
I0315 21:15:30.449944 3304 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0315 21:15:30.449944 3304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:15:30.475844 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:30.480125 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:30.550685 3304 ops.go:34] apiserver oom_adj: -16
I0315 21:15:30.665183 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:30.674974 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:15:30.675152 1332 retry.go:31] will retry after 319.696256ms: state is "Stopped"
I0315 21:15:31.004595 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:31.271800 4576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:15:32.105850 4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubelet
I0315 21:15:32.876011 4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubelet': No such file or directory
I0315 21:15:32.876276 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet --> /var/lib/minikube/binaries/v1.26.2/kubelet (121268472 bytes)
W0315 21:15:33.333982 11164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900 returned with exit code 1
W0315 21:15:33.334081 11164 network_create.go:148] failed to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900: exit status 1
stdout:
stderr:
Error response from daemon: Pool overlaps with other one on this address space
W0315 21:15:33.334145 11164 network_create.go:115] failed to create docker network embed-certs-348900 192.168.58.0/24, will retry: subnet is taken
I0315 21:15:33.379254 11164 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0315 21:15:33.406969 11164 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e10420}
I0315 21:15:33.406969 11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0315 21:15:33.416637 11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
I0315 21:15:33.931710 11164 network_create.go:107] docker network embed-certs-348900 192.168.67.0/24 created
I0315 21:15:33.931710 11164 kic.go:117] calculated static IP "192.168.67.2" for the "embed-certs-348900" container
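Above, network creation for embed-certs-348900 first tries 192.168.58.0/24, the daemon rejects it with "Pool overlaps with other one on this address space", and minikube falls back to the next free private subnet, 192.168.67.0/24. A minimal Go sketch of that try-next-subnet loop; the candidate list is hard-coded here as an assumption, whereas minikube scans for a free subnet:

// Minimal sketch of the subnet fallback seen above. The candidate subnets are
// an illustrative assumption; minikube computes free private /24s at runtime.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "embed-certs-348900"
	candidates := []string{"192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	for _, subnet := range candidates {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, name).CombinedOutput()
		if err == nil {
			fmt.Printf("created network %s on %s\n", name, subnet)
			return
		}
		fmt.Printf("subnet %s rejected: %s", subnet, out) // e.g. "Pool overlaps with other one ..."
	}
	fmt.Println("no free subnet found")
}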
I0315 21:15:33.961692 11164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0315 21:15:34.382414 11164 cli_runner.go:164] Run: docker volume create embed-certs-348900 --label name.minikube.sigs.k8s.io=embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true
I0315 21:15:34.716016 11164 oci.go:103] Successfully created a docker volume embed-certs-348900
I0315 21:15:34.727122 11164 cli_runner.go:164] Run: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib
I0315 21:15:34.549401 3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (4.0692845s)
I0315 21:15:34.549401 3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0735649s)
I0315 21:15:34.575936 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:35.677911 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:36.689764 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:37.173919 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:37.680647 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:38.677808 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:36.012455 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:15:36.012558 1332 retry.go:31] will retry after 307.806183ms: state is "Stopped"
I0315 21:15:36.332781 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:38.718404 11164 cli_runner.go:217] Completed: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib: (3.9912367s)
I0315 21:15:38.718694 11164 oci.go:107] Successfully prepared a docker volume embed-certs-348900
I0315 21:15:38.718763 11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:15:38.718763 11164 kic.go:190] Starting extracting preloaded images to volume ...
I0315 21:15:38.735548 11164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir
I0315 21:15:39.684705 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:40.178045 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:41.173794 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:41.681379 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:42.683323 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:43.182131 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:41.339223 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:15:41.339409 1332 retry.go:31] will retry after 386.719795ms: state is "Stopped"
I0315 21:15:41.739620 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:44.046130 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:15:44.046265 1332 retry.go:31] will retry after 731.95405ms: https://127.0.0.1:65165/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:15:44.784826 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:44.930024 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:15:44.930412 1332 kubeadm.go:608] needs reconfigure: apiserver error: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
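The api_server.go:252/278 lines above show the check that drives the "needs reconfigure" decision: minikube polls the apiserver's /healthz endpoint on the forwarded local port and treats 403 ("system:anonymous") and 500 ("healthz check failed") responses as not-yet-ready, retrying after a short delay. A hedged, stand-alone sketch of such a poller (the endpoint, timeout, and backoff values are illustrative assumptions, not minikube's code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
// 403 and 500 responses (as seen in the log) are treated as "retry later".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver presents a cert for 127.0.0.1 signed by the cluster CA;
		// this anonymous probe skips verification for simplicity (sketch only).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %.60s...\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond) // simple fixed backoff for the sketch
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:65165/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}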
I0315 21:15:44.930412 1332 kubeadm.go:1120] stopping kube-system containers ...
I0315 21:15:44.948612 1332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:15:45.444192 1332 docker.go:456] Stopping containers: [e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0]
I0315 21:15:45.468741 1332 ssh_runner.go:195] Run: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0
I0315 21:15:44.191394 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:45.685532 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:48.821222 3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.1356462s)
I0315 21:15:48.821384 3304 kubeadm.go:1073] duration metric: took 18.3714764s to wait for elevateKubeSystemPrivileges.
I0315 21:15:48.821384 3304 kubeadm.go:403] StartCluster complete in 50.2400255s
I0315 21:15:48.821513 3304 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:48.821905 3304 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:15:48.825059 3304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:48.828077 3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 21:15:48.828077 3304 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0315 21:15:48.828879 3304 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0315 21:15:48.828800 3304 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-103800"
I0315 21:15:48.829118 3304 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-103800"
I0315 21:15:48.829179 3304 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-103800"
I0315 21:15:48.829179 3304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-103800"
I0315 21:15:48.829300 3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
I0315 21:15:48.878494 3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
I0315 21:15:48.879545 3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
I0315 21:15:49.358108 3304 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0315 21:15:50.124359 4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0315 21:15:50.223962 4576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
I0315 21:15:50.297658 4576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0315 21:15:50.374920 4576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
I0315 21:15:50.483223 4576 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0315 21:15:50.503211 4576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 21:15:50.560996 4576 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000 for IP: 192.168.85.2
I0315 21:15:50.561164 4576 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:50.561830 4576 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
I0315 21:15:50.562026 4576 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
I0315 21:15:50.562749 4576 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key
I0315 21:15:50.562749 4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt with IP's: []
I0315 21:15:49.456354 3304 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0315 21:15:49.456907 3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0315 21:15:49.478318 3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
I0315 21:15:49.848514 3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
I0315 21:15:49.872375 3304 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-103800"
I0315 21:15:49.872629 3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
I0315 21:15:49.901466 3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
I0315 21:15:49.934700 3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1066246s)
I0315 21:15:49.936184 3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0315 21:15:50.250574 3304 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0315 21:15:50.250698 3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0315 21:15:50.264810 3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
I0315 21:15:50.363018 3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0315 21:15:50.573127 3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
I0315 21:15:51.185346 3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0315 21:15:52.041249 3304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-103800" context rescaled to 1 replicas
I0315 21:15:52.041249 3304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:15:52.050693 3304 out.go:177] * Verifying Kubernetes components...
I0315 21:15:52.068989 3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:15:52.931105 3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.994746s)
I0315 21:15:52.931105 3304 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
I0315 21:15:53.543980 3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.1809688s)
I0315 21:15:53.543980 3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3586386s)
I0315 21:15:53.543980 3304 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4749945s)
I0315 21:15:53.547333 3304 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0315 21:15:53.551130 3304 addons.go:499] enable addons completed in 4.7230615s: enabled=[storage-provisioner default-storageclass]
I0315 21:15:53.562222 3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-103800
I0315 21:15:53.866492 3304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-103800" to be "Ready" ...
I0315 21:15:53.933789 3304 node_ready.go:49] node "old-k8s-version-103800" has status "Ready":"True"
I0315 21:15:53.933928 3304 node_ready.go:38] duration metric: took 67.3813ms waiting for node "old-k8s-version-103800" to be "Ready" ...
I0315 21:15:53.933978 3304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:15:53.954978 3304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
I0315 21:15:55.263337 1332 ssh_runner.go:235] Completed: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0: (9.7945662s)
I0315 21:15:55.280007 1332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0315 21:15:50.791437 4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt ...
I0315 21:15:50.811528 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: {Name:mk1a7714c10c13a7d5c8fb1098bc038f605ad5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:50.813206 4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key ...
I0315 21:15:50.813206 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key: {Name:mk6d5b75048bc1f92c0f990335a0e77ae990113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:50.814115 4576 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c
I0315 21:15:50.814711 4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0315 21:15:51.462758 4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c ...
I0315 21:15:51.462758 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c: {Name:mkbe5d6759390ded2e92d33f951b55651f871d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.465635 4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c ...
I0315 21:15:51.465635 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c: {Name:mkeabc19ce40a151a2335523f300cb2173b405a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.465984 4576 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt
I0315 21:15:51.467767 4576 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key
I0315 21:15:51.475866 4576 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key
I0315 21:15:51.475866 4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt with IP's: []
I0315 21:15:51.587728 4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt ...
I0315 21:15:51.587834 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt: {Name:mk7c62a1dda77e6dc05d2537ac317544e81f57a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.589765 4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key ...
I0315 21:15:51.589848 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key: {Name:mk8190fc7ddb34a4dc4e27e4845c7aee9bb89866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
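The certs.go/crypto.go lines from PID 4576 cover per-profile certificate generation: a minikube-user client cert, an apiserver serving cert with the IP SANs [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1], and an aggregator proxy-client cert, each written under the profile directory. A minimal, assumed sketch (not minikube's implementation) of signing a serving cert with IP SANs against an existing CA using the Go standard library:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signServingCert issues a cert for the given IP SANs, signed by caCert/caKey.
// It mirrors the apiserver.crt generation step in the log at a very high level.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 192.168.85.2, 10.96.0.1, 127.0.0.1, 10.0.0.1
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Self-signed CA stand-in for the cached minikube CA key pair (sketch only).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")}
	certPEM, keyPEM, err := signServingCert(caCert, caKey, ips)
	if err != nil {
		panic(err)
	}
	os.WriteFile("apiserver.crt", certPEM, 0o644)
	os.WriteFile("apiserver.key", keyPEM, 0o600)
}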
I0315 21:15:51.598260 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
W0315 21:15:51.600164 4576 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
I0315 21:15:51.600164 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I0315 21:15:51.600164 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I0315 21:15:51.600849 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0315 21:15:51.600849 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0315 21:15:51.601444 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
I0315 21:15:51.603533 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0315 21:15:51.706046 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0315 21:15:51.773521 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0315 21:15:51.835553 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0315 21:15:51.896596 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0315 21:15:51.961384 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0315 21:15:52.020772 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0315 21:15:52.161594 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0315 21:15:52.223729 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0315 21:15:52.295451 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
I0315 21:15:52.368796 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
I0315 21:15:52.440447 4576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0315 21:15:52.501633 4576 ssh_runner.go:195] Run: openssl version
I0315 21:15:52.539319 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
I0315 21:15:52.596897 4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
I0315 21:15:52.617219 4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
I0315 21:15:52.634012 4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
I0315 21:15:52.676116 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
I0315 21:15:52.732985 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
I0315 21:15:52.795424 4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
I0315 21:15:52.811657 4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
I0315 21:15:52.824204 4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
I0315 21:15:52.868586 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
I0315 21:15:52.920203 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0315 21:15:52.980456 4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:52.999359 4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:53.012117 4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:53.068045 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0315 21:15:53.097602 4576 kubeadm.go:401] StartCluster: {Name:no-preload-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:15:53.106935 4576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:15:53.188443 4576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0315 21:15:53.248153 4576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 21:15:53.292225 4576 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0315 21:15:53.310023 4576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 21:15:53.350373 4576 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0315 21:15:53.350373 4576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0315 21:15:53.480709 4576 kubeadm.go:322] W0315 21:15:53.477710 2248 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0315 21:15:53.619484 4576 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0315 21:15:53.941137 4576 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0315 21:15:56.130859 3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
I0315 21:15:58.590590 3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
I0315 21:15:55.667015 1332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 21:15:55.884955 1332 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Mar 15 21:13 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Mar 15 21:13 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Mar 15 21:14 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Mar 15 21:13 /etc/kubernetes/scheduler.conf
I0315 21:15:55.906317 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0315 21:15:55.970490 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0315 21:15:56.077831 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0315 21:15:56.164837 1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0315 21:15:56.189369 1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0315 21:15:56.278633 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0315 21:15:56.350783 1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0315 21:15:56.368651 1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0315 21:15:56.472488 1332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 21:15:56.554151 1332 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
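Before deciding to reconfigure, the 1332 process greps each generated kubeconfig (admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf) for the expected https://control-plane.minikube.internal:8443 server URL, removes any file that no longer references it, and then re-runs the kubeadm init phase steps. A rough local-filesystem sketch of that check, with paths and URL taken from the lines above (the log performs these steps over SSH, so this is only an illustrative stand-in):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// pruneStaleKubeconfigs removes kubeconfig files that do not point at the
// expected control-plane endpoint, so `kubeadm init phase kubeconfig` can
// regenerate them, as in the log above.
func pruneStaleKubeconfigs(dir, endpoint string) error {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil {
			return fmt.Errorf("reading %s: %w", path, err)
		}
		if strings.Contains(string(data), endpoint) {
			continue // file already targets the expected endpoint
		}
		fmt.Printf("%q not found in %s, removing stale config\n", endpoint, path)
		if err := os.Remove(path); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := pruneStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443"); err != nil {
		fmt.Println(err)
	}
}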
I0315 21:15:56.554288 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:56.838520 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:58.821631 1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9831146s)
I0315 21:15:58.821631 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.241679 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.531884 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.837145 1332 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:15:59.862394 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:00.562737 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:01.081569 3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:03.528471 3304 pod_ready.go:92] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:03.528551 3304 pod_ready.go:81] duration metric: took 9.5735907s waiting for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
I0315 21:16:03.528551 3304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
I0315 21:16:03.557031 3304 pod_ready.go:92] pod "kube-proxy-cfcpx" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:03.557086 3304 pod_ready.go:81] duration metric: took 28.5355ms waiting for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
I0315 21:16:03.557086 3304 pod_ready.go:38] duration metric: took 9.623095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:03.557194 3304 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:16:03.572979 3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.613975 3304 api_server.go:71] duration metric: took 11.5727472s to wait for apiserver process to appear ...
I0315 21:16:03.613975 3304 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:03.613975 3304 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65314/healthz ...
I0315 21:16:03.643577 3304 api_server.go:278] https://127.0.0.1:65314/healthz returned 200:
ok
I0315 21:16:03.656457 3304 api_server.go:140] control plane version: v1.16.0
I0315 21:16:03.656457 3304 api_server.go:130] duration metric: took 42.4823ms to wait for apiserver health ...
I0315 21:16:03.656537 3304 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:03.667107 3304 system_pods.go:59] 3 kube-system pods found
I0315 21:16:03.667180 3304 system_pods.go:61] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:03.667180 3304 system_pods.go:61] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:03.667180 3304 system_pods.go:61] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:03.667180 3304 system_pods.go:74] duration metric: took 10.5957ms to wait for pod list to return data ...
I0315 21:16:03.667180 3304 default_sa.go:34] waiting for default service account to be created ...
I0315 21:16:03.676892 3304 default_sa.go:45] found service account: "default"
I0315 21:16:03.677053 3304 default_sa.go:55] duration metric: took 9.8734ms for default service account to be created ...
I0315 21:16:03.677104 3304 system_pods.go:116] waiting for k8s-apps to be running ...
I0315 21:16:01.047261 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:01.561853 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:02.057572 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:02.554491 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.060987 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.560744 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.058096 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.574094 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:05.054883 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:05.558867 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.285721 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:04.285721 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:04.285721 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:04.285721 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:04.285721 3304 retry.go:31] will retry after 219.526595ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:04.529762 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:04.529762 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:04.529762 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:04.529762 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:04.529762 3304 retry.go:31] will retry after 379.322135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:04.941567 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:04.941567 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:04.941567 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:04.941567 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:04.941567 3304 retry.go:31] will retry after 439.394592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:05.410063 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:05.410190 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:05.410190 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:05.410246 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:05.410246 3304 retry.go:31] will retry after 547.53451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:05.971998 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:05.971998 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:05.971998 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:05.971998 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:05.971998 3304 retry.go:31] will retry after 474.225372ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:06.466534 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:06.466718 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:06.466718 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:06.466718 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:06.466718 3304 retry.go:31] will retry after 680.585019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:07.175871 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:07.175871 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:07.175871 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:07.175871 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:07.175871 3304 retry.go:31] will retry after 979.191711ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:08.550247 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:08.550247 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:08.550247 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:08.550247 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:08.550247 3304 retry.go:31] will retry after 1.232453731s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
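The retry.go:31 lines from PID 3304 show the wait loop for kube-system components: each attempt lists the pods, reports which control-plane components are still missing, and retries after a growing delay until etcd, kube-apiserver, kube-controller-manager, and kube-scheduler appear or the 6m0s budget runs out. A generic, assumed sketch of such a backoff helper (the delays and the fake check in main are illustrative only):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff keeps calling check until it returns nil or the timeout
// elapses, roughly doubling the delay between attempts.
func retryWithBackoff(timeout, initialDelay time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := initialDelay
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Stand-in check: pretend the control-plane pods show up on the 4th attempt.
	attempts := 0
	err := retryWithBackoff(6*time.Minute, 200*time.Millisecond, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler")
		}
		return nil
	})
	fmt.Println("result:", err)
}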
I0315 21:16:06.064030 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.559451 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.836193 1332 api_server.go:71] duration metric: took 6.999061s to wait for apiserver process to appear ...
I0315 21:16:06.836348 1332 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:06.836472 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:06.844702 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:16:07.349930 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:07.360047 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:16:07.852770 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:09.202438 11164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir: (30.466496s)
I0315 21:16:09.202651 11164 kic.go:199] duration metric: took 30.483946 seconds to extract preloaded images to volume
I0315 21:16:09.210313 11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:16:10.155940 11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:16:09.4164826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:16:10.165464 11164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0315 21:16:11.073846 11164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f
I0315 21:16:12.556246 11164 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f: (1.4822642s)
I0315 21:16:12.573402 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Running}}
I0315 21:16:12.899930 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
I0315 21:16:13.219648 11164 cli_runner.go:164] Run: docker exec embed-certs-348900 stat /var/lib/dpkg/alternatives/iptables
I0315 21:16:09.817018 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:09.817099 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:09.817128 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:09.817171 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:09.817212 3304 retry.go:31] will retry after 1.174345338s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:11.034520 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:11.034666 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:11.034666 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:11.034666 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:11.034865 3304 retry.go:31] will retry after 1.617952037s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:12.678044 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:12.678093 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:12.678161 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:12.678161 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:12.678161 3304 retry.go:31] will retry after 2.664928648s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:12.856341 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:16:13.355164 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:13.531052 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0315 21:16:13.531052 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:16:13.856894 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:13.948093 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:13.948207 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:14.353756 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:14.444021 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:14.444582 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:14.850032 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:14.881729 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:14.881822 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:15.359619 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:15.458273 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:15.458359 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:15.846895 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:15.875897 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
ok
I0315 21:16:15.909269 1332 api_server.go:140] control plane version: v1.26.2
I0315 21:16:15.909297 1332 api_server.go:130] duration metric: took 9.0729659s to wait for apiserver health ...
I0315 21:16:15.909353 1332 cni.go:84] Creating CNI manager for ""
I0315 21:16:15.909353 1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:15.912744 1332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0315 21:16:13.756342 11164 oci.go:144] the created container "embed-certs-348900" has a running status.
I0315 21:16:13.756477 11164 kic.go:221] Creating ssh key for kic: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
I0315 21:16:14.119932 11164 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0315 21:16:14.639346 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
I0315 21:16:14.940713 11164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0315 21:16:14.940713 11164 kic_runner.go:114] Args: [docker exec --privileged embed-certs-348900 chown docker:docker /home/docker/.ssh/authorized_keys]
I0315 21:16:15.500441 11164 kic.go:261] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
I0315 21:16:16.178648 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
I0315 21:16:16.488888 11164 machine.go:88] provisioning docker machine ...
I0315 21:16:16.488888 11164 ubuntu.go:169] provisioning hostname "embed-certs-348900"
I0315 21:16:16.502911 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:16.840113 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:16.856244 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:16.856277 11164 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-348900 && echo "embed-certs-348900" | sudo tee /etc/hostname
I0315 21:16:17.147013 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348900
I0315 21:16:17.160758 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:17.464133 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:17.465429 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:17.465429 11164 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-348900' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348900/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-348900' | sudo tee -a /etc/hosts;
fi
fi
I0315 21:16:17.739135 11164 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0315 21:16:17.739135 11164 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0315 21:16:17.739135 11164 ubuntu.go:177] setting up certificates
I0315 21:16:17.739135 11164 provision.go:83] configureAuth start
I0315 21:16:17.755889 11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
I0315 21:16:18.035724 11164 provision.go:138] copyHostCerts
I0315 21:16:18.036560 11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0315 21:16:18.036560 11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0315 21:16:18.037267 11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
I0315 21:16:18.038895 11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0315 21:16:18.038895 11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0315 21:16:18.039720 11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0315 21:16:18.041165 11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0315 21:16:18.041165 11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0315 21:16:18.041925 11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0315 21:16:18.042745 11164 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.embed-certs-348900 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-348900]
I0315 21:16:15.383021 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:15.383097 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:15.383222 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:15.383222 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:15.383288 3304 retry.go:31] will retry after 2.578717787s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:17.995544 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:17.995544 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:17.995544 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:17.995544 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:17.997123 3304 retry.go:31] will retry after 3.689658526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:15.925415 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0315 21:16:15.965847 1332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0315 21:16:16.079955 1332 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:16.096342 1332 system_pods.go:59] 6 kube-system pods found
I0315 21:16:16.096342 1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0315 21:16:16.096342 1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0315 21:16:16.096342 1332 system_pods.go:74] duration metric: took 16.2168ms to wait for pod list to return data ...
I0315 21:16:16.096342 1332 node_conditions.go:102] verifying NodePressure condition ...
I0315 21:16:16.105140 1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0315 21:16:16.105226 1332 node_conditions.go:123] node cpu capacity is 16
I0315 21:16:16.105269 1332 node_conditions.go:105] duration metric: took 8.8846ms to run NodePressure ...
I0315 21:16:16.105316 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:16:17.333440 1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2280887s)
I0315 21:16:17.333615 1332 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0315 21:16:17.354686 1332 kubeadm.go:784] kubelet initialised
I0315 21:16:17.354754 1332 kubeadm.go:785] duration metric: took 21.1391ms waiting for restarted kubelet to initialise ...
I0315 21:16:17.354822 1332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:17.435085 1332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:19.521467 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:18.251532 11164 provision.go:172] copyRemoteCerts
I0315 21:16:18.273974 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0315 21:16:18.283506 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:18.570902 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:18.768649 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0315 21:16:18.841686 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0315 21:16:18.905617 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0315 21:16:18.967699 11164 provision.go:86] duration metric: configureAuth took 1.2285308s
I0315 21:16:18.967770 11164 ubuntu.go:193] setting minikube options for container-runtime
I0315 21:16:18.968727 11164 config.go:182] Loaded profile config "embed-certs-348900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:16:18.979877 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:19.285905 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:19.286914 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:19.286979 11164 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0315 21:16:19.567687 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0315 21:16:19.567687 11164 ubuntu.go:71] root file system type: overlay
I0315 21:16:19.567687 11164 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0315 21:16:19.582813 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:19.874162 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:19.875396 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:19.875396 11164 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0315 21:16:20.174872 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0315 21:16:20.188182 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:20.453718 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:20.454944 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:20.454944 11164 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0315 21:16:22.142486 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-03-15 21:16:20.152689000 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0315 21:16:22.142486 11164 machine.go:91] provisioned docker machine in 5.6536091s
I0315 21:16:22.142486 11164 client.go:171] LocalClient.Create took 50.2614576s
I0315 21:16:22.142486 11164 start.go:167] duration metric: libmachine.API.Create for "embed-certs-348900" took 50.2615841s
I0315 21:16:22.142486 11164 start.go:300] post-start starting for "embed-certs-348900" (driver="docker")
I0315 21:16:22.142486 11164 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0315 21:16:22.164869 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0315 21:16:22.176134 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:22.457317 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:22.664346 11164 ssh_runner.go:195] Run: cat /etc/os-release
I0315 21:16:22.686266 11164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0315 21:16:22.686266 11164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0315 21:16:22.686266 11164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0315 21:16:22.686266 11164 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0315 21:16:22.686266 11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
I0315 21:16:22.686902 11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
I0315 21:16:22.688699 11164 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem -> 88122.pem in /etc/ssl/certs
I0315 21:16:22.706595 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0315 21:16:22.738368 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /etc/ssl/certs/88122.pem (1708 bytes)
I0315 21:16:22.808162 11164 start.go:303] post-start completed in 665.6768ms
I0315 21:16:22.820367 11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
I0315 21:16:23.085450 11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
I0315 21:16:23.099327 11164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0315 21:16:23.105640 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:21.705945 3304 system_pods.go:86] 4 kube-system pods found
I0315 21:16:21.706010 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:21.706103 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Pending
I0315 21:16:21.706103 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:21.706185 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:21.706219 3304 retry.go:31] will retry after 5.083561084s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:22.006711 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:24.016700 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:23.396840 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:23.581013 11164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0315 21:16:23.600663 11164 start.go:128] duration metric: createHost completed in 51.7244434s
I0315 21:16:23.600663 11164 start.go:83] releasing machines lock for "embed-certs-348900", held for 51.7253337s
I0315 21:16:23.612591 11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
I0315 21:16:23.883432 11164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0315 21:16:23.894275 11164 ssh_runner.go:195] Run: cat /version.json
I0315 21:16:23.894535 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:23.897398 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:24.187980 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:24.211376 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:24.384184 11164 ssh_runner.go:195] Run: systemctl --version
I0315 21:16:24.554870 11164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0315 21:16:24.601965 11164 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
W0315 21:16:24.636442 11164 start.go:407] unable to name loopback interface in dockerConfigureNetworkPlugin: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
stdout:
stderr:
find: '\\etc\\cni\\net.d': No such file or directory
I0315 21:16:24.653193 11164 ssh_runner.go:195] Run: which cri-dockerd
I0315 21:16:24.687918 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0315 21:16:24.720950 11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0315 21:16:24.782057 11164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0315 21:16:24.838659 11164 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0315 21:16:24.838782 11164 start.go:485] detecting cgroup driver to use...
I0315 21:16:24.838782 11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0315 21:16:24.839372 11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 21:16:24.907810 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0315 21:16:24.962942 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0315 21:16:24.999607 11164 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0315 21:16:25.016372 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0315 21:16:25.084691 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 21:16:25.123717 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0315 21:16:25.175564 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 21:16:25.220146 11164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0315 21:16:25.283915 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0315 21:16:25.334938 11164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0315 21:16:25.388356 11164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0315 21:16:25.435298 11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:16:25.641460 11164 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0315 21:16:25.860833 11164 start.go:485] detecting cgroup driver to use...
I0315 21:16:25.861441 11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0315 21:16:25.882735 11164 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0315 21:16:25.939579 11164 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0315 21:16:25.960420 11164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0315 21:16:26.059890 11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 21:16:26.183579 11164 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0315 21:16:26.466649 11164 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0315 21:16:26.677013 11164 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0315 21:16:26.677080 11164 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
I0315 21:16:26.756071 11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:16:26.959814 11164 ssh_runner.go:195] Run: sudo systemctl restart docker
I0315 21:16:27.700313 11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0315 21:16:27.915578 11164 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0315 21:16:28.148265 11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0315 21:16:26.834333 3304 system_pods.go:86] 5 kube-system pods found
I0315 21:16:26.834442 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:26.834494 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
I0315 21:16:26.834494 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:26.834542 3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
I0315 21:16:26.834542 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:26.834542 3304 retry.go:31] will retry after 6.853083205s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:29.227662 4576 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] Running pre-flight checks
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0315 21:16:29.229013 4576 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0315 21:16:29.233640 4576 out.go:204] - Generating certificates and keys ...
I0315 21:16:29.234315 4576 kubeadm.go:322] [certs] Using existing ca certificate authority
I0315 21:16:29.234315 4576 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0315 21:16:29.234315 4576 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0315 21:16:29.234862 4576 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0315 21:16:29.235050 4576 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0315 21:16:29.235155 4576 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0315 21:16:29.235331 4576 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0315 21:16:29.235774 4576 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
I0315 21:16:29.235871 4576 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0315 21:16:29.235871 4576 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
I0315 21:16:29.236566 4576 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0315 21:16:29.236865 4576 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0315 21:16:29.237080 4576 kubeadm.go:322] [certs] Generating "sa" key and public key
I0315 21:16:29.237437 4576 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0315 21:16:29.237659 4576 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0315 21:16:29.237841 4576 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0315 21:16:29.238095 4576 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0315 21:16:29.238325 4576 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0315 21:16:29.238639 4576 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0315 21:16:29.238966 4576 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0315 21:16:29.239000 4576 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0315 21:16:29.239299 4576 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0315 21:16:29.244122 4576 out.go:204] - Booting up control plane ...
I0315 21:16:29.244122 4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0315 21:16:29.244122 4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0315 21:16:29.244875 4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0315 21:16:29.245231 4576 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0315 21:16:29.245856 4576 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0315 21:16:29.246514 4576 kubeadm.go:322] [apiclient] All control plane components are healthy after 27.005043 seconds
I0315 21:16:29.247464 4576 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0315 21:16:29.247889 4576 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0315 21:16:29.247889 4576 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0315 21:16:29.249317 4576 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-470000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0315 21:16:29.249647 4576 kubeadm.go:322] [bootstrap-token] Using token: g8jwe6.dtydkfj8fkgcjwxk
I0315 21:16:29.253362 4576 out.go:204] - Configuring RBAC rules ...
I0315 21:16:29.253362 4576 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0315 21:16:29.253982 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0315 21:16:29.254534 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0315 21:16:29.254971 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0315 21:16:29.255290 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0315 21:16:29.255767 4576 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0315 21:16:29.256101 4576 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0315 21:16:29.256445 4576 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0315 21:16:29.256697 4576 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0315 21:16:29.256697 4576 kubeadm.go:322]
I0315 21:16:29.256697 4576 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0315 21:16:29.256697 4576 kubeadm.go:322]
I0315 21:16:29.256697 4576 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0315 21:16:29.257255 4576 kubeadm.go:322]
I0315 21:16:29.257312 4576 kubeadm.go:322] mkdir -p $HOME/.kube
I0315 21:16:29.257312 4576 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0315 21:16:29.258206 4576 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0315 21:16:29.258206 4576 kubeadm.go:322]
I0315 21:16:29.258392 4576 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0315 21:16:29.258392 4576 kubeadm.go:322]
I0315 21:16:29.258392 4576 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0315 21:16:29.258392 4576 kubeadm.go:322]
I0315 21:16:29.259028 4576 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0315 21:16:29.259028 4576 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0315 21:16:29.259028 4576 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0315 21:16:29.259586 4576 kubeadm.go:322]
I0315 21:16:29.259793 4576 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0315 21:16:29.259793 4576 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0315 21:16:29.259793 4576 kubeadm.go:322]
I0315 21:16:29.260469 4576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
I0315 21:16:29.260726 4576 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
I0315 21:16:29.260890 4576 kubeadm.go:322] --control-plane
I0315 21:16:29.260890 4576 kubeadm.go:322]
I0315 21:16:29.261169 4576 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0315 21:16:29.261228 4576 kubeadm.go:322]
I0315 21:16:29.261412 4576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
I0315 21:16:29.261412 4576 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716
I0315 21:16:29.261412 4576 cni.go:84] Creating CNI manager for ""
I0315 21:16:29.261412 4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:29.266347 4576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0315 21:16:28.373729 11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:16:28.596843 11164 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0315 21:16:28.641503 11164 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0315 21:16:28.659715 11164 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0315 21:16:28.687449 11164 start.go:553] Will wait 60s for crictl version
I0315 21:16:28.704098 11164 ssh_runner.go:195] Run: which crictl
I0315 21:16:28.753769 11164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0315 21:16:29.076356 11164 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.1
RuntimeApiVersion: v1alpha2
I0315 21:16:29.092004 11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0315 21:16:29.211116 11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0315 21:16:26.048179 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:28.050667 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:29.001447 1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.001447 1332 pod_ready.go:81] duration metric: took 11.5663842s waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.001447 1332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.028330 1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.028330 1332 pod_ready.go:81] duration metric: took 26.8832ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.028330 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.057628 1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.057628 1332 pod_ready.go:81] duration metric: took 29.2978ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.057628 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.092004 1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.092004 1332 pod_ready.go:81] duration metric: took 34.3758ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.092004 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.131434 1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.131486 1332 pod_ready.go:81] duration metric: took 39.482ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.131486 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.402295 1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.402345 1332 pod_ready.go:81] duration metric: took 270.8098ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.402345 1332 pod_ready.go:38] duration metric: took 12.0475003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
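The readiness wait above can be approximated from outside minikube with kubectl wait; a rough sketch, assuming kubectl is on PATH and the kubeconfig already points at this cluster (minikube itself polls the API directly rather than shelling out):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Wait up to 4 minutes for the CoreDNS pods (label k8s-app=kube-dns) in
	// kube-system to report the Ready condition, mirroring the pod_ready loop above.
	cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
		"--for=condition=Ready", "pod", "-l", "k8s-app=kube-dns", "--timeout=4m")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pods not ready within the timeout:", err)
	}
}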
I0315 21:16:29.402386 1332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:16:29.426130 1332 ops.go:34] apiserver oom_adj: -16
I0315 21:16:29.426187 1332 kubeadm.go:637] restartCluster took 1m4.338895s
I0315 21:16:29.426266 1332 kubeadm.go:403] StartCluster complete in 1m4.4532784s
I0315 21:16:29.426351 1332 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:29.426601 1332 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:16:29.429857 1332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:29.432982 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 21:16:29.432982 1332 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0315 21:16:29.433680 1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:16:29.438415 1332 out.go:177] * Enabled addons:
I0315 21:16:29.443738 1332 addons.go:499] enable addons completed in 10.8462ms: enabled=[]
I0315 21:16:29.452842 1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0315 21:16:29.467764 1332 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-073300" context rescaled to 1 replicas
I0315 21:16:29.467764 1332 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:16:29.470858 1332 out.go:177] * Verifying Kubernetes components...
I0315 21:16:29.484573 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:29.761590 1332 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0315 21:16:29.775423 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
I0315 21:16:30.117208 1332 node_ready.go:35] waiting up to 6m0s for node "pause-073300" to be "Ready" ...
I0315 21:16:30.134817 1332 node_ready.go:49] node "pause-073300" has status "Ready":"True"
I0315 21:16:30.134886 1332 node_ready.go:38] duration metric: took 17.4789ms waiting for node "pause-073300" to be "Ready" ...
I0315 21:16:30.135066 1332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:30.162562 1332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.219441 1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:30.219583 1332 pod_ready.go:81] duration metric: took 57.0207ms waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.219583 1332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.608418 1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:30.608458 1332 pod_ready.go:81] duration metric: took 388.876ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.608458 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.286357 4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0315 21:16:29.434851 4576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0315 21:16:29.759117 4576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:16:29.777121 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:29.784090 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:29.333720 11164 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
I0315 21:16:29.346161 11164 cli_runner.go:164] Run: docker exec -t embed-certs-348900 dig +short host.docker.internal
I0315 21:16:29.900879 11164 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0315 21:16:29.916562 11164 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0315 21:16:29.935552 11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 21:16:29.995136 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:30.338304 11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:16:30.350351 11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0315 21:16:30.410968 11164 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0315 21:16:30.410997 11164 docker.go:560] Images already preloaded, skipping extraction
I0315 21:16:30.423332 11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0315 21:16:30.503657 11164 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0315 21:16:30.503657 11164 cache_images.go:84] Images are preloaded, skipping loading
I0315 21:16:30.514842 11164 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0315 21:16:30.592454 11164 cni.go:84] Creating CNI manager for ""
I0315 21:16:30.593071 11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:30.593126 11164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0315 21:16:30.593164 11164 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348900 NodeName:embed-certs-348900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0315 21:16:30.593164 11164 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "embed-certs-348900"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
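The generated config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks those documents and prints each kind plus the kubelet cgroupDriver, so it can be compared against the `docker info --format {{.CgroupDriver}}` result; the file path is the one the log copies to below (/var/tmp/minikube/kubeadm.yaml.new), and the gopkg.in/yaml.v3 dependency is an assumption for illustration:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	// Iterate every YAML document separated by "---".
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Println("kind:", doc["kind"])
		if doc["kind"] == "KubeletConfiguration" {
			// Expect "cgroupfs" here, matching the CgroupDriver reported by docker info.
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
		}
	}
}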
I0315 21:16:30.593164 11164 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-348900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0315 21:16:30.608458 11164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0315 21:16:30.650429 11164 binaries.go:44] Found k8s binaries, skipping transfer
I0315 21:16:30.663574 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0315 21:16:30.692787 11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0315 21:16:30.740392 11164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0315 21:16:30.785258 11164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
I0315 21:16:30.856683 11164 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0315 21:16:30.874232 11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 21:16:30.910227 11164 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900 for IP: 192.168.67.2
I0315 21:16:30.910227 11164 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:30.910959 11164 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
I0315 21:16:30.910959 11164 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
I0315 21:16:30.912090 11164 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key
I0315 21:16:30.912245 11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt with IP's: []
I0315 21:16:31.176322 11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt ...
I0315 21:16:31.176322 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt: {Name:mk3adaad25efd04206f4069d51ba11c764eb6365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.185180 11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key ...
I0315 21:16:31.186710 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key: {Name:mkf9f54f56133eba18d6e348fef5a1556121e000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.186988 11164 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e
I0315 21:16:31.187994 11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0315 21:16:31.980645 11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e ...
I0315 21:16:31.980645 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e: {Name:mk2261dfadf80693084f767fa62cccae0b07268d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.987167 11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e ...
I0315 21:16:31.987167 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e: {Name:mk003ae0b84dcfe7543e40c97ad15121d53cc917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.988356 11164 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt
I0315 21:16:31.999575 11164 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key
I0315 21:16:32.001372 11164 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key
I0315 21:16:32.001790 11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt with IP's: []
I0315 21:16:32.228690 11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt ...
I0315 21:16:32.228763 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt: {Name:mk6cbb1c106aa2dec99a9338908a5ea76d5206ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:32.230290 11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key ...
I0315 21:16:32.230290 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key: {Name:mk5c3038fe2a59bd4ebdf1cb320d733f3de9b70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:32.243236 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
W0315 21:16:32.243866 11164 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
I0315 21:16:32.244089 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I0315 21:16:32.244671 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I0315 21:16:32.245081 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0315 21:16:32.245162 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0315 21:16:32.245850 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
I0315 21:16:32.248063 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0315 21:16:32.321659 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0315 21:16:32.402505 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0315 21:16:32.491666 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0315 21:16:32.579600 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0315 21:16:32.651879 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0315 21:16:32.716051 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0315 21:16:32.797235 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0315 21:16:32.885295 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
I0315 21:16:32.963869 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
I0315 21:16:33.029503 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0315 21:16:33.108304 11164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0315 21:16:33.169580 11164 ssh_runner.go:195] Run: openssl version
I0315 21:16:33.195467 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0315 21:16:33.230164 11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
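The openssl/ln step above installs minikubeCA.pem into the trusted-certificate directory. A quick way to inspect what was installed is to parse the PEM and print its subject and expiry, roughly what `openssl x509 -noout -subject -enddate` reports; a sketch assuming the path copied above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the CA certificate that was copied to the node's trust directory.
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("not after:", cert.NotAfter)
}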
I0315 21:16:31.017074 1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.017074 1332 pod_ready.go:81] duration metric: took 408.6175ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.017074 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.395349 1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.395349 1332 pod_ready.go:81] duration metric: took 378.275ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.395349 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.792495 1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.792495 1332 pod_ready.go:81] duration metric: took 397.1476ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.792495 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:32.219569 1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:32.220120 1332 pod_ready.go:81] duration metric: took 427.0739ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:32.220120 1332 pod_ready.go:38] duration metric: took 2.0850147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:32.220120 1332 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:16:32.232971 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:32.332638 1332 api_server.go:71] duration metric: took 2.8648801s to wait for apiserver process to appear ...
I0315 21:16:32.332638 1332 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:32.332638 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:32.362918 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
ok
I0315 21:16:32.430820 1332 api_server.go:140] control plane version: v1.26.2
I0315 21:16:32.430820 1332 api_server.go:130] duration metric: took 98.1819ms to wait for apiserver health ...
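The health check above is a plain HTTPS GET against the forwarded apiserver port. A stripped-down sketch of the same probe; TLS verification is skipped here purely to keep the example self-contained (minikube's own client trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Port 65165 is the host port docker mapped to 8443 for this profile.
	resp, err := client.Get("https://127.0.0.1:65165/healthz")
	if err != nil {
		fmt.Println("healthz not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}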
I0315 21:16:32.430820 1332 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:32.455349 1332 system_pods.go:59] 6 kube-system pods found
I0315 21:16:32.455486 1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
I0315 21:16:32.455486 1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
I0315 21:16:32.455544 1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
I0315 21:16:32.455642 1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:32.455642 1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
I0315 21:16:32.455685 1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
I0315 21:16:32.455785 1332 system_pods.go:74] duration metric: took 24.9239ms to wait for pod list to return data ...
I0315 21:16:32.455785 1332 default_sa.go:34] waiting for default service account to be created ...
I0315 21:16:32.637154 1332 default_sa.go:45] found service account: "default"
I0315 21:16:32.637301 1332 default_sa.go:55] duration metric: took 181.4813ms for default service account to be created ...
I0315 21:16:32.637301 1332 system_pods.go:116] waiting for k8s-apps to be running ...
I0315 21:16:32.844031 1332 system_pods.go:86] 6 kube-system pods found
I0315 21:16:32.844031 1332 system_pods.go:89] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
I0315 21:16:32.844031 1332 system_pods.go:126] duration metric: took 206.7296ms to wait for k8s-apps to be running ...
I0315 21:16:32.844031 1332 system_svc.go:44] waiting for kubelet service to be running ....
I0315 21:16:32.858698 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:32.902525 1332 system_svc.go:56] duration metric: took 56.9493ms WaitForService to wait for kubelet.
I0315 21:16:32.902598 1332 kubeadm.go:578] duration metric: took 3.4348415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0315 21:16:32.902669 1332 node_conditions.go:102] verifying NodePressure condition ...
I0315 21:16:33.016156 1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0315 21:16:33.016241 1332 node_conditions.go:123] node cpu capacity is 16
I0315 21:16:33.016278 1332 node_conditions.go:105] duration metric: took 113.5716ms to run NodePressure ...
I0315 21:16:33.016316 1332 start.go:228] waiting for startup goroutines ...
I0315 21:16:33.016316 1332 start.go:233] waiting for cluster config update ...
I0315 21:16:33.016351 1332 start.go:242] writing updated cluster config ...
I0315 21:16:33.039378 1332 ssh_runner.go:195] Run: rm -f paused
I0315 21:16:33.289071 1332 start.go:555] kubectl: 1.18.2, cluster: 1.26.2 (minor skew: 8)
I0315 21:16:33.292949 1332 out.go:177]
W0315 21:16:33.295479 1332 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
I0315 21:16:33.297706 1332 out.go:177] - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
I0315 21:16:33.301501 1332 out.go:177] * Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default
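The closing warning is driven by a simple minor-version comparison between the local kubectl (1.18.2) and the cluster (1.26.2). A sketch of that arithmetic:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.18.2", "1.26.2"
	skew := minor(cluster) - minor(kubectl)
	fmt.Printf("minor skew: %d\n", skew) // 26 - 18 = 8, matching the log
	if skew > 1 || skew < -1 {
		fmt.Println("kubectl may have incompatibilities with the cluster version")
	}
}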
I0315 21:16:33.717595 3304 system_pods.go:86] 6 kube-system pods found
I0315 21:16:33.717595 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:33.717595 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
I0315 21:16:33.717595 3304 system_pods.go:89] "kube-controller-manager-old-k8s-version-103800" [eaf30ba4-8812-46a0-a046-aa376656a6eb] Pending
I0315 21:16:33.717595 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:33.717595 3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
I0315 21:16:33.717595 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:33.717595 3304 retry.go:31] will retry after 7.396011667s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:31.527682 4576 ssh_runner.go:235] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.7684448s)
I0315 21:16:31.527682 4576 ops.go:34] apiserver oom_adj: -16
I0315 21:16:31.527682 4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.7435955s)
I0315 21:16:31.528138 4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.7509547s)
I0315 21:16:31.546907 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:32.651879 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:33.157563 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:33.663575 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:34.656851 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:35.154601 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:35.655087 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
*
* ==> Docker <==
* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:38 UTC. --
Mar 15 21:15:16 pause-073300 dockerd[5130]: time="2023-03-15T21:15:16.627341500Z" level=info msg="Loading containers: start."
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.180814100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.293764400Z" level=info msg="Loading containers: done."
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403670900Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403801700Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403820400Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403829500Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403946800Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.404077100Z" level=info msg="Daemon has completed initialization"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.495876500Z" level=info msg="[core] [Server #7] Server created" module=grpc
Mar 15 21:15:17 pause-073300 systemd[1]: Started Docker Application Container Engine.
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.517552200Z" level=info msg="API listen on [::]:2376"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.543627500Z" level=info msg="API listen on /var/run/docker.sock"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744692100Z" level=info msg="ignoring event" container=923853eff8e2f1864e6cfeaaffa94363f41b1b6d4244613c11e443d63b83f2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744884600Z" level=info msg="ignoring event" container=51f04c53d355992b4720b6fe3fb08eeebaffdc34d08262d17db9f24dc486c5f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.839172700Z" level=info msg="ignoring event" container=c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.840438600Z" level=info msg="ignoring event" container=e92b1a5d6d0c83422026888e04b4103fbb1a6aad2a814bd916a79bec7e5cb8d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.853642900Z" level=info msg="ignoring event" container=a35da045d30f2532ff1a5d88e989615ddf33df4f90272696757ca1b38c1a5eba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927068700Z" level=info msg="ignoring event" container=ed67a04efb8ec818ab6782a05f9c291801a4458a1a0233c184aaf80f6bd8a373 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927810400Z" level=info msg="ignoring event" container=95e8431f84471d1685f5d908a022789eb2644a61f5292997dfe306c1e9821c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.033930300Z" level=info msg="ignoring event" container=e722cf7eda6bbc9bcf453efc486e10336872ccd7d74dbeb91e51085c094b0009 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.128698500Z" level=info msg="ignoring event" container=1f51fce69c226f17529256ccf645edbf972854fc5f36bf524dd8bb1a98d65d9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.434269500Z" level=info msg="ignoring event" container=6824568445c66b1f085e714f1a98df4ca1f40f4f7f67ed8f6069fbde15fd4b87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:51 pause-073300 dockerd[5130]: time="2023-03-15T21:15:51.189996200Z" level=info msg="ignoring event" container=e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:55 pause-073300 dockerd[5130]: time="2023-03-15T21:15:55.079374900Z" level=info msg="ignoring event" container=0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
c3986aec6e000 5185b96f0becf 11 seconds ago Running coredns 2 a5bac8046c295
b7b4669a56d5c 6f64e7135a6ec 13 seconds ago Running kube-proxy 2 0e90c4b9c88b9
aba41f11fdc83 fce326961ae2d 37 seconds ago Running etcd 2 f6e4108617808
571d485669178 db8f409d9a5d7 37 seconds ago Running kube-scheduler 2 cc13660f35478
e6bb3d9a35ff0 240e201d5b0d8 37 seconds ago Running kube-controller-manager 3 c468745ca2cf5
88f9444587356 63d3239c3c159 37 seconds ago Running kube-apiserver 3 5496303bf33fe
e3043962e5ef5 5185b96f0becf About a minute ago Exited coredns 1 51f04c53d3559
6824568445c66 fce326961ae2d About a minute ago Exited etcd 1 a35da045d30f2
95e8431f84471 db8f409d9a5d7 About a minute ago Exited kube-scheduler 1 923853eff8e2f
1f51fce69c226 240e201d5b0d8 About a minute ago Exited kube-controller-manager 2 e722cf7eda6bb
c2ad60cad36db 6f64e7135a6ec About a minute ago Exited kube-proxy 1 e92b1a5d6d0c8
0cb5567e32abb 63d3239c3c159 About a minute ago Exited kube-apiserver 2 ed67a04efb8ec
*
* ==> coredns [c3986aec6e00] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:39857 - 53557 "HINFO IN 4117550418294164078.6192551117797702913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0986876s
*
* ==> coredns [e3043962e5ef] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:58165 - 40858 "HINFO IN 6114658028450402923.1632777775304523244. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0560197s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: pause-073300
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-073300
kubernetes.io/os=linux
minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e
minikube.k8s.io/name=pause-073300
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_15T21_14_05_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Mar 2023 21:13:54 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-073300
AcquireTime: <unset>
RenewTime: Wed, 15 Mar 2023 21:16:34 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:13:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:13:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:13:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:14:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.103.2
Hostname: pause-073300
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: b1932dc991aa41bd806e459062926d45
System UUID: b1932dc991aa41bd806e459062926d45
Boot ID: c49fbee3-0cdd-49eb-8984-31df821a263f
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://23.0.1
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-787d4945fb-2q246 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2m21s
kube-system etcd-pause-073300 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 2m38s
kube-system kube-apiserver-pause-073300 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2m38s
kube-system kube-controller-manager-pause-073300 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2m39s
kube-system kube-proxy-m4md5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m21s
kube-system kube-scheduler-pause-073300 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2m30s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (4%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m13s kube-proxy
Normal Starting 12s kube-proxy
Normal NodeHasSufficientPID 3m8s (x7 over 3m9s) kubelet Node pause-073300 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 3m8s (x8 over 3m9s) kubelet Node pause-073300 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 3m8s (x8 over 3m9s) kubelet Node pause-073300 status is now: NodeHasSufficientMemory
Normal Starting 2m33s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m33s kubelet Node pause-073300 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m33s kubelet Node pause-073300 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m33s kubelet Node pause-073300 status is now: NodeHasSufficientPID
Normal NodeNotReady 2m32s kubelet Node pause-073300 status is now: NodeNotReady
Normal NodeReady 2m31s kubelet Node pause-073300 status is now: NodeReady
Normal NodeAllocatableEnforced 2m31s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m22s node-controller Node pause-073300 event: Registered Node pause-073300 in Controller
Normal Starting 39s kubelet Starting kubelet.
Normal NodeHasSufficientPID 38s (x7 over 38s) kubelet Node pause-073300 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 38s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 37s (x8 over 38s) kubelet Node pause-073300 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 37s (x8 over 38s) kubelet Node pause-073300 status is now: NodeHasNoDiskPressure
Normal RegisteredNode 10s node-controller Node pause-073300 event: Registered Node pause-073300 in Controller
*
* ==> dmesg <==
* [Mar15 20:45] WSL2: Performing memory compaction.
[Mar15 20:47] WSL2: Performing memory compaction.
[Mar15 20:48] WSL2: Performing memory compaction.
[Mar15 20:49] WSL2: Performing memory compaction.
[Mar15 20:51] WSL2: Performing memory compaction.
[Mar15 20:52] WSL2: Performing memory compaction.
[Mar15 20:53] WSL2: Performing memory compaction.
[Mar15 20:54] WSL2: Performing memory compaction.
[Mar15 20:56] WSL2: Performing memory compaction.
[Mar15 20:57] WSL2: Performing memory compaction.
[Mar15 20:58] WSL2: Performing memory compaction.
[Mar15 20:59] WSL2: Performing memory compaction.
[Mar15 21:00] WSL2: Performing memory compaction.
[Mar15 21:01] WSL2: Performing memory compaction.
[Mar15 21:03] WSL2: Performing memory compaction.
[ +24.007152] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[Mar15 21:04] process 'docker/tmp/qemu-check145175011/check' started with executable stack
[ +21.555954] WSL2: Performing memory compaction.
[Mar15 21:06] WSL2: Performing memory compaction.
[Mar15 21:07] hrtimer: interrupt took 920300 ns
[Mar15 21:09] WSL2: Performing memory compaction.
[Mar15 21:11] WSL2: Performing memory compaction.
[Mar15 21:12] WSL2: Performing memory compaction.
[Mar15 21:13] WSL2: Performing memory compaction.
[Mar15 21:15] WSL2: Performing memory compaction.
*
* ==> etcd [6824568445c6] <==
* {"level":"info","ts":"2023-03-15T21:15:44.027Z","caller":"traceutil/trace.go:171","msg":"trace[2137636385] transaction","detail":"{read_only:false; number_of_response:1; response_revision:415; }","duration":"100.9434ms","start":"2023-03-15T21:15:43.926Z","end":"2023-03-15T21:15:44.027Z","steps":["trace[2137636385] 'process raft request' (duration: 100.5034ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.540Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6806ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873768454336989569 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" mod_revision:412 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" value_size:582 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" > >>","response":"size:16"}
{"level":"info","ts":"2023-03-15T21:15:44.541Z","caller":"traceutil/trace.go:171","msg":"trace[493093877] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"109.7451ms","start":"2023-03-15T21:15:44.431Z","end":"2023-03-15T21:15:44.541Z","steps":["trace[493093877] 'process raft request' (duration: 109.4963ms)"],"step_count":1}
{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[455019656] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"198.5194ms","start":"2023-03-15T21:15:44.343Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[455019656] 'process raft request' (duration: 87.4089ms)","trace[455019656] 'compare' (duration: 106.3186ms)"],"step_count":2}
{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[1743432337] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:443; }","duration":"112.8874ms","start":"2023-03-15T21:15:44.430Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[1743432337] 'read index received' (duration: 852.1µs)","trace[1743432337] 'applied index is now lower than readState.Index' (duration: 112.0303ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-15T21:15:44.544Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.3137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-node-lease\" ","response":"range_response_count:1 size:363"}
{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[83833859] range","detail":"{range_begin:/registry/namespaces/kube-node-lease; range_end:; response_count:1; response_revision:418; }","duration":"115.03ms","start":"2023-03-15T21:15:44.429Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[83833859] 'agreement among raft nodes before linearized reading' (duration: 113.2035ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.545Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.3129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[1382087029] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:418; }","duration":"111.3651ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[1382087029] 'agreement among raft nodes before linearized reading' (duration: 111.2411ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.547Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.4412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
{"level":"info","ts":"2023-03-15T21:15:44.547Z","caller":"traceutil/trace.go:171","msg":"trace[1257507815] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:418; }","duration":"113.4898ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.547Z","steps":["trace[1257507815] 'agreement among raft nodes before linearized reading' (duration: 113.3486ms)"],"step_count":1}
{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[1166219815] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:446; }","duration":"121.4317ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[1166219815] 'read index received' (duration: 3.7558ms)","trace[1166219815] 'applied index is now lower than readState.Index' (duration: 117.6698ms)"],"step_count":2}
{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[513205189] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"125.7589ms","start":"2023-03-15T21:15:44.830Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[513205189] 'process raft request' (duration: 94.9828ms)","trace[513205189] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kube-system/kube-proxy-m4md5; req_size:4522; } (duration: 27.8113ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-15T21:15:44.957Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.7804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
{"level":"info","ts":"2023-03-15T21:15:44.958Z","caller":"traceutil/trace.go:171","msg":"trace[1937091289] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:421; }","duration":"123.6279ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.958Z","steps":["trace[1937091289] 'agreement among raft nodes before linearized reading' (duration: 121.5636ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.965Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.7433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:64 size:57899"}
{"level":"info","ts":"2023-03-15T21:15:44.965Z","caller":"traceutil/trace.go:171","msg":"trace[1243225417] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:64; response_revision:421; }","duration":"129.8213ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.965Z","steps":["trace[1243225417] 'agreement among raft nodes before linearized reading' (duration: 123.3525ms)"],"step_count":1}
{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2023-03-15T21:15:46.436Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
{"level":"info","ts":"2023-03-15T21:15:46.534Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
*
* ==> etcd [aba41f11fdc8] <==
* {"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-15T21:16:07.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 3"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.138Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:pause-073300 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-15T21:16:09.143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-15T21:16:09.144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.103.2:2379"}
*
* ==> kernel <==
* 21:16:39 up 1:24, 0 users, load average: 11.44, 9.76, 6.44
Linux pause-073300 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [0cb5567e32ab] <==
* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0315 21:15:54.852783 1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0315 21:15:55.001054 1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0315 21:15:55.018769 1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-apiserver [88f944458735] <==
* I0315 21:16:13.321430 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0315 21:16:13.321719 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0315 21:16:13.322696 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0315 21:16:13.320670 1 crd_finalizer.go:266] Starting CRDFinalizer
I0315 21:16:13.320231 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0315 21:16:13.324414 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0315 21:16:13.437824 1 shared_informer.go:280] Caches are synced for configmaps
I0315 21:16:13.525354 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0315 21:16:13.623881 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0315 21:16:13.624222 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0315 21:16:13.624252 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0315 21:16:13.624258 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0315 21:16:13.624333 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0315 21:16:13.625322 1 shared_informer.go:280] Caches are synced for node_authorizer
I0315 21:16:13.625384 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0315 21:16:13.625410 1 cache.go:39] Caches are synced for autoregister controller
I0315 21:16:13.630897 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0315 21:16:14.357698 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0315 21:16:16.572417 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0315 21:16:16.602561 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0315 21:16:16.951884 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0315 21:16:17.136478 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0315 21:16:17.246459 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0315 21:16:28.244694 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0315 21:16:28.342519 1 controller.go:615] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [1f51fce69c22] <==
* I0315 21:15:33.346996 1 serving.go:348] Generated self-signed cert in-memory
I0315 21:15:39.060876 1 controllermanager.go:182] Version: v1.26.2
I0315 21:15:39.061047 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:15:39.072013 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0315 21:15:39.072120 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0315 21:15:39.072625 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:15:39.072677 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
*
* ==> kube-controller-manager [e6bb3d9a35ff] <==
* I0315 21:16:28.124587 1 shared_informer.go:280] Caches are synced for cidrallocator
I0315 21:16:28.124592 1 shared_informer.go:280] Caches are synced for crt configmap
I0315 21:16:28.124598 1 shared_informer.go:280] Caches are synced for endpoint
I0315 21:16:28.124661 1 shared_informer.go:280] Caches are synced for HPA
I0315 21:16:28.124898 1 shared_informer.go:280] Caches are synced for GC
I0315 21:16:28.124186 1 shared_informer.go:280] Caches are synced for taint
I0315 21:16:28.125247 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0315 21:16:28.125313 1 taint_manager.go:211] "Sending events to api server"
I0315 21:16:28.125358 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0315 21:16:28.125464 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-073300. Assuming now as a timestamp.
I0315 21:16:28.125524 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0315 21:16:28.126198 1 event.go:294] "Event occurred" object="pause-073300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-073300 event: Registered Node pause-073300 in Controller"
I0315 21:16:28.126964 1 shared_informer.go:280] Caches are synced for stateful set
I0315 21:16:28.227084 1 shared_informer.go:280] Caches are synced for namespace
I0315 21:16:28.227137 1 shared_informer.go:280] Caches are synced for disruption
I0315 21:16:28.227298 1 shared_informer.go:280] Caches are synced for deployment
I0315 21:16:28.227547 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0315 21:16:28.227631 1 shared_informer.go:280] Caches are synced for service account
I0315 21:16:28.229520 1 shared_informer.go:273] Waiting for caches to sync for garbage collector
I0315 21:16:28.233560 1 shared_informer.go:280] Caches are synced for resource quota
I0315 21:16:28.236781 1 shared_informer.go:280] Caches are synced for resource quota
I0315 21:16:28.529112 1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:16:28.534472 1 shared_informer.go:280] Caches are synced for garbage collector
I0315 21:16:28.562852 1 shared_informer.go:280] Caches are synced for garbage collector
I0315 21:16:28.562973 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [b7b4669a56d5] <==
* I0315 21:16:25.942265 1 node.go:163] Successfully retrieved node IP: 192.168.103.2
I0315 21:16:25.944192 1 server_others.go:109] "Detected node IP" address="192.168.103.2"
I0315 21:16:25.944360 1 server_others.go:535] "Using iptables proxy"
I0315 21:16:26.134212 1 server_others.go:176] "Using iptables Proxier"
I0315 21:16:26.134360 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0315 21:16:26.134376 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0315 21:16:26.134395 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0315 21:16:26.134427 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0315 21:16:26.135408 1 server.go:655] "Version info" version="v1.26.2"
I0315 21:16:26.135540 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:16:26.136322 1 config.go:317] "Starting service config controller"
I0315 21:16:26.136477 1 shared_informer.go:273] Waiting for caches to sync for service config
I0315 21:16:26.136504 1 config.go:226] "Starting endpoint slice config controller"
I0315 21:16:26.136526 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0315 21:16:26.136357 1 config.go:444] "Starting node config controller"
I0315 21:16:26.137498 1 shared_informer.go:273] Waiting for caches to sync for node config
I0315 21:16:26.236790 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0315 21:16:26.238214 1 shared_informer.go:280] Caches are synced for node config
I0315 21:16:26.238275 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-proxy [c2ad60cad36d] <==
* E0315 21:15:29.627155 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
E0315 21:15:30.826046 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
E0315 21:15:43.235847 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": net/http: TLS handshake timeout
*
* ==> kube-scheduler [571d48566917] <==
* I0315 21:16:07.677853 1 serving.go:348] Generated self-signed cert in-memory
I0315 21:16:13.656832 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
I0315 21:16:13.656978 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:16:13.756221 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0315 21:16:13.756343 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0315 21:16:13.758353 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0315 21:16:13.758370 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:16:13.759778 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0315 21:16:13.759904 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 21:16:13.757625 1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
I0315 21:16:13.758377 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0315 21:16:13.924166 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0315 21:16:13.924382 1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
I0315 21:16:13.924585 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [95e8431f8447] <==
* I0315 21:15:34.052612 1 serving.go:348] Generated self-signed cert in-memory
W0315 21:15:44.136305 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0315 21:15:44.140386 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0315 21:15:44.225673 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0315 21:15:44.225720 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0315 21:15:44.445561 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
I0315 21:15:44.445741 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:15:44.453477 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0315 21:15:44.455841 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0315 21:15:44.456010 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 21:15:44.456059 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:15:44.925804 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 21:15:46.348879 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0315 21:15:46.350010 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0315 21:15:46.352703 1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
I0315 21:15:46.355076 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0315 21:15:46.355314 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:39 UTC. --
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.766354 7548 kubelet_node_status.go:73] "Successfully registered node" node="pause-073300"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.826254 7548 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.828960 7548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.830844 7548 apiserver.go:52] "Watching apiserver"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846713 7548 topology_manager.go:210] "Topology Admit Handler"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846988 7548 topology_manager.go:210] "Topology Admit Handler"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.925554 7548 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944245 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-proxy\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944547 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-xtables-lock\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944610 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-lib-modules\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944669 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vbb\" (UniqueName: \"kubernetes.io/projected/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-api-access-b7vbb\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945094 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-config-volume\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945520 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnj9\" (UniqueName: \"kubernetes.io/projected/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-kube-api-access-mbnj9\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945563 7548 reconciler.go:41] "Reconciler: start to sync state"
Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.149192 7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.150324 7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154149 7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropaga
tion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154342 7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154347 7548 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},Vol
umeMount{Name:kube-api-access-b7vbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-m4md5_kube-system(428ae579-2b68-4526-a2b0-d8bb5922870f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.155707 7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-m4md5" podUID=428ae579-2b68-4526-a2b0-d8bb5922870f
Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.763783 7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768377 7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropaga
tion:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768684 7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
Mar 15 21:16:25 pause-073300 kubelet[7548]: I0315 21:16:25.248648 7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
Mar 15 21:16:27 pause-073300 kubelet[7548]: I0315 21:16:27.246680 7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
-- /stdout --
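Note (editorial, not part of the captured output): the component logs above show the control plane being recreated during the second start rather than reused — etcd closes at 21:15:46 and a new member wins election at term 4 at 21:16:09, and second kube-apiserver, kube-controller-manager and kube-scheduler containers come up alongside the exited ones. The kubelet's CreateContainerConfigError for coredns-787d4945fb-2q246 and kube-proxy-m4md5 ("services have not yet been read at least once, cannot construct envvars") is typically a transient condition while the restarted kubelet's caches fill, and both containers are retried via RemoveContainer at 21:16:25-27. A sketch of how one could confirm the pods recovered after the run (assumed commands, not executed by the harness):
kubectl --context pause-073300 -n kube-system get pods -o wide
kubectl --context pause-073300 -n kube-system describe pod kube-proxy-m4md5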
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300: (2.1781542s)
helpers_test.go:261: (dbg) Run: kubectl --context pause-073300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-073300
helpers_test.go:235: (dbg) docker inspect pause-073300:
-- stdout --
[
{
"Id": "8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af",
"Created": "2023-03-15T21:12:57.6447279Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 235036,
"ExitCode": 0,
"Error": "",
"StartedAt": "2023-03-15T21:13:02.5343301Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:c2228ee73b919fe6986a8848f936a81a268f0e56f65fc402964f596a1336d16b",
"ResolvConfPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hostname",
"HostsPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/hosts",
"LogPath": "/var/lib/docker/containers/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af-json.log",
"Name": "/pause-073300",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"pause-073300:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-073300",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "0"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 2147483648,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b-init/diff:/var/lib/docker/overlay2/dd4a105805e89f3781ba34ad53d0a86096f0b864f9eade98210c90b3db11e614/diff:/var/lib/docker/overlay2/85f05c8966ab20f24eea0cadf9b702a2755c1a700aee4fcacd3754b8fa7f8a91/diff:/var/lib/docker/overlay2/b2c60f67ad52427067a519010db687573f6b5b01526e9e9493d88bbb3dcaf069/diff:/var/lib/docker/overlay2/ca870ef465e163b19b7e0ef24b89c201cc7cfe12753a6ca6a515827067e4fc98/diff:/var/lib/docker/overlay2/f55801eccf5ae4ff6206eaaaca361e1d9bfadc5759172bb8072e835b0002419b/diff:/var/lib/docker/overlay2/3da247e6db7b0c502d6067a49cfb704f596cd5fe9a3a874f6888ae9cc2373233/diff:/var/lib/docker/overlay2/f0dcb6d169a751860b7c097c666afe3d8fba3aac20d90e95b7f85913b7d1fda7/diff:/var/lib/docker/overlay2/a0c906b3378b625d84a7a2d043cc982545599c488b72767e2b4822211ddee871/diff:/var/lib/docker/overlay2/1380f7e23737bb69bab3e1c3b37fff4a603a1096ba1e984f2808fdb9fc5664b7/diff:/var/lib/docker/overlay2/f09380
dffb1afe5e97599b999b6d05a1d0b97490fc3afb897018955e3589ddf0/diff:/var/lib/docker/overlay2/12504a4aab3b43a1624555c565265eb2a252f3cc64b5942527ead795f1b46742/diff:/var/lib/docker/overlay2/2f17a40545e098dc56e6667d78dfde761f9ae57ff4c2dcab77a6135abc29f050/diff:/var/lib/docker/overlay2/378841db26151d8a66f60032a9366d4572aeb0fd0db1c1af9429abf5d7b6ab82/diff:/var/lib/docker/overlay2/14ee7241acf63b7e56e700bccdbcc29bd6530ebd357799238641498ccb978bc1/diff:/var/lib/docker/overlay2/0e384b8276413ac21818038eacaf3da54a8ac43c6ccef737b2c4e70e568fe287/diff:/var/lib/docker/overlay2/66beff05ea52aebfaea737c44ff3da16f742e7e2577ccea2c1fe954085a1e7f4/diff:/var/lib/docker/overlay2/fe7b0a2c7d3f1889e322a156881a5066e5e784dc1888fbf172b4beada499c14a/diff:/var/lib/docker/overlay2/bf3118300571672a5d3b839bbbbaa42516c05f16305f5b944d88d38687857207/diff:/var/lib/docker/overlay2/d1326cf983418efce550556b370f71d9b4d9e6671a9267ea6433967dcafff129/diff:/var/lib/docker/overlay2/cc4d1369146bbaac53f23e5cb8e072c195a8c109396c1f305d9a90dbcb491d62/diff:/var/lib/d
ocker/overlay2/20a6a00f4e15b51632a8a26911faf3243318c3e7bd9266fe9c926ca6070526a8/diff:/var/lib/docker/overlay2/6a6bfa0be9e2c1a0aa9fa555897c7f62f7c23b782a2117560731f10b833692a0/diff:/var/lib/docker/overlay2/0d9ed53179f81c8d2e276195863f6ac1ba99be69a7217caa97c19fe1121b0d38/diff:/var/lib/docker/overlay2/f9e70916967de3d00f48ca66d15ec3af34bd3980334b7ecb8950be0a5aee2e5e/diff:/var/lib/docker/overlay2/8a3ebe53f0b355704a58efda53f1dcf8ae0099f0a7947c748e7c447044baed05/diff:/var/lib/docker/overlay2/f6841f5c7deb52ba587f1365fd0bc48fe4334bd9678f4846740d9e4f3df386c4/diff:/var/lib/docker/overlay2/7729eb6c4bb6c79eae923e1946b180dcdb33aa85c259a8a21b46994e681a329f/diff:/var/lib/docker/overlay2/86ccbe980393e3c2dea4faf1f5b45fa86ac8f47190cf4fb3ebb23d5fd6687d44/diff:/var/lib/docker/overlay2/48b28921897a52ef79e37091b3d3df88fa4e01604e3a63d7e3dbbd72e551797c/diff:/var/lib/docker/overlay2/b9f9c70e4945260452936930e508cb1e7d619927da4230c7b792e5908a93ec46/diff:/var/lib/docker/overlay2/39f84637efc722da57b6de997d757e8709af3d48f8cba3da8848d3674aa
7ba4d/diff:/var/lib/docker/overlay2/9d81ba80e5128eb395bcffc7b56889c3d18172c222e637671a4b3c12c0a72afd/diff:/var/lib/docker/overlay2/03583facbdd50e79e467eb534dfcbe3d5e47aef4b25195138b3c0134ebd7f07e/diff:/var/lib/docker/overlay2/38e991cef8fb39c883da64e57775232fd1df5a4c67f32565e747b7363f336632/diff:/var/lib/docker/overlay2/0e0ebf6f489a93585842ec4fef7d044da67fd8a9504f91fe03cc03c6928134b8/diff:/var/lib/docker/overlay2/dedec87bbba9e6a1a68a159c167cac4c10a25918fa3d00630d6570db2ca290eb/diff:/var/lib/docker/overlay2/dc09130400d9f44a28862a6484b44433985893e9a8f49df62c38c0bd6b5e4e2c/diff:/var/lib/docker/overlay2/f00d229f6d9f2960571b2e1c365f30bd680b686c0d4569b5190c072a626c6811/diff:/var/lib/docker/overlay2/1a9993f098965bbd60b6e43b5998e4fcae02f81d65cc863bd8f6e29f7e2b8426/diff:/var/lib/docker/overlay2/500f950cf1835311103c129d3c1487e8e6b917ad928788ee14527cd8342c544f/diff:/var/lib/docker/overlay2/018feb310d5aa53cd6175c82f8ca56d22b3c1ad26ae5cfda5f6e3b56ca3919e6/diff:/var/lib/docker/overlay2/f84198610374e88e1ba6917bf70c8d9cea6ede
68b5fb4852c7eebcb536a12a83/diff",
"MergedDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/merged",
"UpperDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/diff",
"WorkDir": "/var/lib/docker/overlay2/daf89d89f16ecbd4935a7a509e1ebcf567d4c7992b1f3939dc1333e423f6287b/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "pause-073300",
"Source": "/var/lib/docker/volumes/pause-073300/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "pause-073300",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-073300",
"name.minikube.sigs.k8s.io": "pause-073300",
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "20.04",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "c465f6b5b8ea2cbabcd582f953a2ee6755ba6c0b6db6fbc3b931a291aafae975",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65160"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65161"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65163"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65164"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "65165"
}
]
},
"SandboxKey": "/var/run/docker/netns/c465f6b5b8ea",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-073300": {
"IPAMConfig": {
"IPv4Address": "192.168.103.2"
},
"Links": null,
"Aliases": [
"8be68eee5af2",
"pause-073300"
],
"NetworkID": "e97288cdb8ed8d3c843be70e49117f727e8c88772310c60f193237b2f3d2167f",
"EndpointID": "7dff20190b061cfe2a0b46f43c2f9a085fd94900413646e6b074cab27b5ac50e",
"Gateway": "192.168.103.1",
"IPAddress": "192.168.103.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:67:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
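Note (editorial, not part of the captured output): the docker inspect document above shows the kic container itself was not recreated — created at 21:12:57, started at 21:13:02, still Running with RestartCount 0, keeping its static address 192.168.103.2 on the pause-073300 network and publishing 8443/tcp on 127.0.0.1:65165. A shorter way to pull just those fields instead of the full document (assumed commands, shown only as a sketch):
docker inspect -f "{{.State.Status}} {{.State.StartedAt}} {{.RestartCount}}" pause-073300
docker port pause-073300 8443/tcp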
helpers_test.go:239: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300
E0315 21:16:45.025974 8812 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-553600\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-073300 -n pause-073300: (2.5347633s)
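Note (editorial, not part of the captured output): the E0315 21:16:45 cert_rotation error above appears to come from the test process's client-go certificate reload watcher and refers to a client.crt for the addons-553600 profile that no longer exists on disk; it looks unrelated to this test's failure. To list which profile certificate paths the shared kubeconfig still references, one could run (assumed command, shown only as a sketch):
kubectl config view -o jsonpath="{.users[*].user.client-certificate}"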
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-windows-amd64.exe -p pause-073300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-073300 logs -n 25: (4.3846717s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl cat cri-docker | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /etc/systemd/system/cri-docker.service.d/10-cni.conf | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /usr/lib/systemd/system/cri-docker.service | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | cri-dockerd --version | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl status containerd | | | | | |
| | --all --full --no-pager | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl cat containerd | | | | | |
| | --no-pager | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /lib/systemd/system/containerd.service | | | | | |
| ssh | -p cilium-899600 sudo cat | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /etc/containerd/config.toml | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-899600 sudo | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-899600 sudo find | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-899600 sudo crio | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | config | | | | | |
| delete | -p cilium-899600 | cilium-899600 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| start | -p force-systemd-env-387800 | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:15 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr -v=5 | | | | | |
| | --driver=docker | | | | | |
| ssh | cert-options-298900 ssh | cert-options-298900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-298900 -- sudo | cert-options-298900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-298900 | cert-options-298900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| delete | -p cert-expiration-023900 | cert-expiration-023900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | 15 Mar 23 21:13 UTC |
| start | -p old-k8s-version-103800 | old-k8s-version-103800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| start | -p no-preload-470000 | no-preload-470000 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --kubernetes-version=v1.26.2 | | | | | |
| start | -p pause-073300 | pause-073300 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:14 UTC | 15 Mar 23 21:16 UTC |
| | --alsologtostderr -v=1 | | | | | |
| | --driver=docker | | | | | |
| ssh | force-systemd-env-387800 | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
| | ssh docker info --format | | | | | |
| | {{.CgroupDriver}} | | | | | |
| delete | -p force-systemd-env-387800 | force-systemd-env-387800 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | 15 Mar 23 21:15 UTC |
| start | -p embed-certs-348900 | embed-certs-348900 | minikube1\jenkins | v1.29.0 | 15 Mar 23 21:15 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --kubernetes-version=v1.26.2 | | | | | |
|---------|------------------------------------------------------|--------------------------|-------------------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2023/03/15 21:15:28
Running on machine: minikube1
Binary: Built with gc go1.20.2 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0315 21:15:28.142992 11164 out.go:296] Setting OutFile to fd 1840 ...
I0315 21:15:28.223401 11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 21:15:28.223401 11164 out.go:309] Setting ErrFile to fd 1952...
I0315 21:15:28.223401 11164 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0315 21:15:28.262334 11164 out.go:303] Setting JSON to false
I0315 21:15:28.267297 11164 start.go:125] hostinfo: {"hostname":"minikube1","uptime":24330,"bootTime":1678890597,"procs":152,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2728 Build 19045.2728","kernelVersion":"10.0.19045.2728 Build 19045.2728","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
W0315 21:15:28.269446 11164 start.go:133] gopshost.Virtualization returned error: not implemented yet
I0315 21:15:28.271110 11164 out.go:177] * [embed-certs-348900] minikube v1.29.0 on Microsoft Windows 10 Enterprise N 10.0.19045.2728 Build 19045.2728
I0315 21:15:28.276466 11164 notify.go:220] Checking for updates...
I0315 21:15:28.279987 11164 out.go:177] - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:15:28.284307 11164 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0315 21:15:28.287394 11164 out.go:177] - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
I0315 21:15:28.289437 11164 out.go:177] - MINIKUBE_LOCATION=16056
I0315 21:15:28.293408 11164 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0315 21:15:27.107652 3304 kubeadm.go:322] [apiclient] All control plane components are healthy after 22.564526 seconds
I0315 21:15:27.107905 3304 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0315 21:15:27.174450 3304 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I0315 21:15:27.850318 3304 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0315 21:15:27.850318 3304 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-103800 as control-plane by adding the label "node-role.kubernetes.io/master=''"
I0315 21:15:28.451439 3304 kubeadm.go:322] [bootstrap-token] Using token: 1vsykl.s1ca43i7aq3le3xp
I0315 21:15:28.454827 3304 out.go:204] - Configuring RBAC rules ...
I0315 21:15:28.455102 3304 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0315 21:15:28.540595 3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0315 21:15:28.708614 3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0315 21:15:28.750604 3304 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0315 21:15:28.768374 3304 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0315 21:15:28.296206 11164 config.go:182] Loaded profile config "no-preload-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:15:28.296901 11164 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0315 21:15:28.296901 11164 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:15:28.297434 11164 driver.go:365] Setting default libvirt URI to qemu:///system
I0315 21:15:28.716358 11164 docker.go:121] docker version: linux-20.10.23:Docker Desktop 4.17.0 (99724)
I0315 21:15:28.733128 11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:15:30.097371 11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3641858s)
I0315 21:15:30.098315 11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:28.9739466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:15:30.101949 11164 out.go:177] * Using the docker driver based on user configuration
I0315 21:15:25.993789 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:26.001803 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:26.031907 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:26.498885 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:26.520413 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:26.747568 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:26.998577 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:27.005491 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:27.038573 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:27.494449 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:27.510680 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:27.648998 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:28.001209 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:28.016866 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:28.252092 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:28.497926 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:28.519187 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:28.938518 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:29.005873 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:29.022195 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0315 21:15:29.437505 1332 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I0315 21:15:29.498878 1332 api_server.go:165] Checking apiserver status ...
I0315 21:15:29.509169 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:15:29.790027 1332 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6279/cgroup
I0315 21:15:30.138061 1332 api_server.go:181] apiserver freezer: "20:freezer:/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16"
I0315 21:15:30.167896 1332 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be68eee5af20204bdbd885871e98fc65b3fc154c83a3331ce4341ad26fcc1af/kubepods/burstable/podd4d4a3bea62ddb6580910d9ea0aba8c6/0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16/freezer.state
I0315 21:15:30.342651 1332 api_server.go:203] freezer state: "THAWED"
I0315 21:15:30.342651 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:30.356716 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:15:30.356862 1332 retry.go:31] will retry after 297.564807ms: state is "Stopped"
I0315 21:15:28.433538 4576 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.6-0 | docker load": (19.494237s)
I0315 21:15:28.433538 4576 cache_images.go:315] Transferred and loaded C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.5.6-0 from cache
I0315 21:15:28.433538 4576 cache_images.go:123] Successfully loaded all cached images
I0315 21:15:28.434115 4576 cache_images.go:92] LoadImages completed in 1m0.6105675s
I0315 21:15:28.453600 4576 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0315 21:15:28.577481 4576 cni.go:84] Creating CNI manager for ""
I0315 21:15:28.577553 4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:15:28.577553 4576 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0315 21:15:28.577617 4576 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-470000 NodeName:no-preload-470000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0315 21:15:28.577869 4576 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "no-preload-470000"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0315 21:15:28.577869 4576 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=no-preload-470000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0315 21:15:28.591514 4576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0315 21:15:28.640201 4576 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.26.2: Process exited with status 2
stdout:
stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.26.2': No such file or directory
Initiating transfer...
I0315 21:15:28.658006 4576 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.26.2
I0315 21:15:28.718165 4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl
I0315 21:15:28.718374 4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm
I0315 21:15:28.718374 4576 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.26.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet
I0315 21:15:30.110051 4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubeadm
I0315 21:15:30.131361 4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubeadm: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubeadm': No such file or directory
I0315 21:15:30.131361 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubeadm --> /var/lib/minikube/binaries/v1.26.2/kubeadm (46768128 bytes)
I0315 21:15:30.168761 4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubectl
I0315 21:15:30.671927 4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubectl: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubectl': No such file or directory
I0315 21:15:30.672203 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubectl --> /var/lib/minikube/binaries/v1.26.2/kubectl (48029696 bytes)
I0315 21:15:30.105668 11164 start.go:296] selected driver: docker
I0315 21:15:30.105668 11164 start.go:857] validating driver "docker" against <nil>
I0315 21:15:30.105668 11164 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0315 21:15:30.254283 11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:15:31.493680 11164 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.2393516s)
I0315 21:15:31.494207 11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:15:30.5680929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp
,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plu
gins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:15:31.494635 11164 start_flags.go:305] no existing cluster config was found, will generate one from the flags
I0315 21:15:31.496393 11164 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0315 21:15:31.499064 11164 out.go:177] * Using Docker Desktop driver with root privileges
I0315 21:15:31.501160 11164 cni.go:84] Creating CNI manager for ""
I0315 21:15:31.501160 11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:15:31.501160 11164 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0315 21:15:31.501160 11164 start_flags.go:319] config:
{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:15:31.504086 11164 out.go:177] * Starting control plane node embed-certs-348900 in cluster embed-certs-348900
I0315 21:15:31.506766 11164 cache.go:120] Beginning downloading kic base image for docker with docker
I0315 21:15:31.510102 11164 out.go:177] * Pulling base image ...
I0315 21:15:31.512871 11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:15:31.512871 11164 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon
I0315 21:15:31.513118 11164 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4
I0315 21:15:31.513179 11164 cache.go:57] Caching tarball of preloaded images
I0315 21:15:31.513395 11164 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0315 21:15:31.513395 11164 cache.go:60] Finished verifying existence of preloaded tar for v1.26.2 on docker
I0315 21:15:31.514113 11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
I0315 21:15:31.514113 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json: {Name:mk3060d08febbde2429fe9a2baf8bbeb029a2640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:31.875381 11164 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f in local docker daemon, skipping pull
I0315 21:15:31.875429 11164 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f exists in daemon, skipping load
I0315 21:15:31.875429 11164 cache.go:193] Successfully downloaded all kic artifacts
I0315 21:15:31.875429 11164 start.go:364] acquiring machines lock for embed-certs-348900: {Name:mk2351699223ac71a23a94063928109d9d9f576a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0315 21:15:31.875429 11164 start.go:368] acquired machines lock for "embed-certs-348900" in 0s
I0315 21:15:31.876003 11164 start.go:93] Provisioning new machine with config: &{Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:15:31.876319 11164 start.go:125] createHost starting for "" (driver="docker")
I0315 21:15:31.880060 11164 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0315 21:15:31.880999 11164 start.go:159] libmachine.API.Create for "embed-certs-348900" (driver="docker")
I0315 21:15:31.881063 11164 client.go:168] LocalClient.Create starting
I0315 21:15:31.881279 11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
I0315 21:15:31.881815 11164 main.go:141] libmachine: Decoding PEM data...
I0315 21:15:31.881932 11164 main.go:141] libmachine: Parsing certificate...
I0315 21:15:31.881975 11164 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
I0315 21:15:31.881975 11164 main.go:141] libmachine: Decoding PEM data...
I0315 21:15:31.881975 11164 main.go:141] libmachine: Parsing certificate...
I0315 21:15:31.896077 11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0315 21:15:32.230585 11164 cli_runner.go:211] docker network inspect embed-certs-348900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0315 21:15:32.246557 11164 network_create.go:281] running [docker network inspect embed-certs-348900] to gather additional debugging logs...
I0315 21:15:32.246658 11164 cli_runner.go:164] Run: docker network inspect embed-certs-348900
W0315 21:15:32.585407 11164 cli_runner.go:211] docker network inspect embed-certs-348900 returned with exit code 1
I0315 21:15:32.585485 11164 network_create.go:284] error running [docker network inspect embed-certs-348900]: docker network inspect embed-certs-348900: exit status 1
stdout:
[]
stderr:
Error: No such network: embed-certs-348900
I0315 21:15:32.585531 11164 network_create.go:286] output of [docker network inspect embed-certs-348900]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: embed-certs-348900
** /stderr **
I0315 21:15:32.596667 11164 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0315 21:15:32.951201 11164 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0315 21:15:32.983071 11164 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e77440}
I0315 21:15:32.983153 11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0315 21:15:32.994000 11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
I0315 21:15:29.902489 3304 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0315 21:15:30.410425 3304 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0315 21:15:30.439154 3304 kubeadm.go:322]
I0315 21:15:30.439418 3304 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0315 21:15:30.439418 3304 kubeadm.go:322]
I0315 21:15:30.440591 3304 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0315 21:15:30.440591 3304 kubeadm.go:322]
I0315 21:15:30.440591 3304 kubeadm.go:322] mkdir -p $HOME/.kube
I0315 21:15:30.440591 3304 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0315 21:15:30.440591 3304 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0315 21:15:30.441146 3304 kubeadm.go:322]
I0315 21:15:30.441302 3304 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0315 21:15:30.441302 3304 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0315 21:15:30.441302 3304 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0315 21:15:30.441302 3304 kubeadm.go:322]
I0315 21:15:30.442077 3304 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0315 21:15:30.442368 3304 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0315 21:15:30.442368 3304 kubeadm.go:322]
I0315 21:15:30.442768 3304 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
I0315 21:15:30.442976 3304 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
I0315 21:15:30.442976 3304 kubeadm.go:322] --control-plane
I0315 21:15:30.442976 3304 kubeadm.go:322]
I0315 21:15:30.442976 3304 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0315 21:15:30.442976 3304 kubeadm.go:322]
I0315 21:15:30.442976 3304 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1vsykl.s1ca43i7aq3le3xp \
I0315 21:15:30.442976 3304 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716
I0315 21:15:30.449019 3304 kubeadm.go:322] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0315 21:15:30.449255 3304 kubeadm.go:322] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0315 21:15:30.449632 3304 kubeadm.go:322] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 18.09
I0315 21:15:30.449944 3304 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0315 21:15:30.449944 3304 cni.go:84] Creating CNI manager for ""
I0315 21:15:30.449944 3304 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
I0315 21:15:30.449944 3304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:15:30.475844 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:30.480125 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:30.550685 3304 ops.go:34] apiserver oom_adj: -16
I0315 21:15:30.665183 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:30.674974 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:15:30.675152 1332 retry.go:31] will retry after 319.696256ms: state is "Stopped"
I0315 21:15:31.004595 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:31.271800 4576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:15:32.105850 4576 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubelet
I0315 21:15:32.876011 4576 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.26.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.26.2/kubelet: Process exited with status 1
stdout:
stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.26.2/kubelet': No such file or directory
I0315 21:15:32.876276 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.26.2/kubelet --> /var/lib/minikube/binaries/v1.26.2/kubelet (121268472 bytes)
W0315 21:15:33.333982 11164 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900 returned with exit code 1
W0315 21:15:33.334081 11164 network_create.go:148] failed to create docker network embed-certs-348900 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900: exit status 1
stdout:
stderr:
Error response from daemon: Pool overlaps with other one on this address space
W0315 21:15:33.334145 11164 network_create.go:115] failed to create docker network embed-certs-348900 192.168.58.0/24, will retry: subnet is taken
I0315 21:15:33.379254 11164 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0315 21:15:33.406969 11164 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e10420}
I0315 21:15:33.406969 11164 network_create.go:123] attempt to create docker network embed-certs-348900 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0315 21:15:33.416637 11164 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-348900 embed-certs-348900
I0315 21:15:33.931710 11164 network_create.go:107] docker network embed-certs-348900 192.168.67.0/24 created
I0315 21:15:33.931710 11164 kic.go:117] calculated static IP "192.168.67.2" for the "embed-certs-348900" container
I0315 21:15:33.961692 11164 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0315 21:15:34.382414 11164 cli_runner.go:164] Run: docker volume create embed-certs-348900 --label name.minikube.sigs.k8s.io=embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true
I0315 21:15:34.716016 11164 oci.go:103] Successfully created a docker volume embed-certs-348900
I0315 21:15:34.727122 11164 cli_runner.go:164] Run: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib
I0315 21:15:34.549401 3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=old-k8s-version-103800 minikube.k8s.io/updated_at=2023_03_15T21_15_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (4.0692845s)
I0315 21:15:34.549401 3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (4.0735649s)
I0315 21:15:34.575936 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:35.677911 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:36.689764 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:37.173919 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:37.680647 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:38.677808 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:36.012455 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:15:36.012558 1332 retry.go:31] will retry after 307.806183ms: state is "Stopped"
I0315 21:15:36.332781 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:38.718404 11164 cli_runner.go:217] Completed: docker run --rm --name embed-certs-348900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --entrypoint /usr/bin/test -v embed-certs-348900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -d /var/lib: (3.9912367s)
I0315 21:15:38.718694 11164 oci.go:107] Successfully prepared a docker volume embed-certs-348900
I0315 21:15:38.718763 11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:15:38.718763 11164 kic.go:190] Starting extracting preloaded images to volume ...
I0315 21:15:38.735548 11164 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir
I0315 21:15:39.684705 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:40.178045 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:41.173794 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:41.681379 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:42.683323 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:43.182131 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:41.339223 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:15:41.339409 1332 retry.go:31] will retry after 386.719795ms: state is "Stopped"
I0315 21:15:41.739620 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:44.046130 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:15:44.046265 1332 retry.go:31] will retry after 731.95405ms: https://127.0.0.1:65165/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:15:44.784826 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:15:44.930024 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:15:44.930412 1332 kubeadm.go:608] needs reconfigure: apiserver error: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:15:44.930412 1332 kubeadm.go:1120] stopping kube-system containers ...
I0315 21:15:44.948612 1332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:15:45.444192 1332 docker.go:456] Stopping containers: [e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0]
I0315 21:15:45.468741 1332 ssh_runner.go:195] Run: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0
I0315 21:15:44.191394 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:45.685532 3304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:15:48.821222 3304 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (3.1356462s)
I0315 21:15:48.821384 3304 kubeadm.go:1073] duration metric: took 18.3714764s to wait for elevateKubeSystemPrivileges.
I0315 21:15:48.821384 3304 kubeadm.go:403] StartCluster complete in 50.2400255s
I0315 21:15:48.821513 3304 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:48.821905 3304 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:15:48.825059 3304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:48.828077 3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 21:15:48.828077 3304 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0315 21:15:48.828879 3304 config.go:182] Loaded profile config "old-k8s-version-103800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
I0315 21:15:48.828800 3304 addons.go:66] Setting storage-provisioner=true in profile "old-k8s-version-103800"
I0315 21:15:48.829118 3304 addons.go:66] Setting default-storageclass=true in profile "old-k8s-version-103800"
I0315 21:15:48.829179 3304 addons.go:228] Setting addon storage-provisioner=true in "old-k8s-version-103800"
I0315 21:15:48.829179 3304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-103800"
I0315 21:15:48.829300 3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
I0315 21:15:48.878494 3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
I0315 21:15:48.879545 3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
I0315 21:15:49.358108 3304 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0315 21:15:50.124359 4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0315 21:15:50.223962 4576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (449 bytes)
I0315 21:15:50.297658 4576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0315 21:15:50.374920 4576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
I0315 21:15:50.483223 4576 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0315 21:15:50.503211 4576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 21:15:50.560996 4576 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000 for IP: 192.168.85.2
I0315 21:15:50.561164 4576 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:50.561830 4576 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
I0315 21:15:50.562026 4576 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
I0315 21:15:50.562749 4576 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key
I0315 21:15:50.562749 4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt with IP's: []
I0315 21:15:49.456354 3304 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0315 21:15:49.456907 3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0315 21:15:49.478318 3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
I0315 21:15:49.848514 3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
I0315 21:15:49.872375 3304 addons.go:228] Setting addon default-storageclass=true in "old-k8s-version-103800"
I0315 21:15:49.872629 3304 host.go:66] Checking if "old-k8s-version-103800" exists ...
I0315 21:15:49.901466 3304 cli_runner.go:164] Run: docker container inspect old-k8s-version-103800 --format={{.State.Status}}
I0315 21:15:49.934700 3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1066246s)
I0315 21:15:49.936184 3304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0315 21:15:50.250574 3304 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0315 21:15:50.250698 3304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0315 21:15:50.264810 3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-103800
I0315 21:15:50.363018 3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0315 21:15:50.573127 3304 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65315 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\old-k8s-version-103800\id_rsa Username:docker}
I0315 21:15:51.185346 3304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0315 21:15:52.041249 3304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-103800" context rescaled to 1 replicas
I0315 21:15:52.041249 3304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:15:52.050693 3304 out.go:177] * Verifying Kubernetes components...
I0315 21:15:52.068989 3304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:15:52.931105 3304 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.994746s)
I0315 21:15:52.931105 3304 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
I0315 21:15:53.543980 3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.1809688s)
I0315 21:15:53.543980 3304 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.3586386s)
I0315 21:15:53.543980 3304 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4749945s)
I0315 21:15:53.547333 3304 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I0315 21:15:53.551130 3304 addons.go:499] enable addons completed in 4.7230615s: enabled=[storage-provisioner default-storageclass]
I0315 21:15:53.562222 3304 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-103800
I0315 21:15:53.866492 3304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-103800" to be "Ready" ...
I0315 21:15:53.933789 3304 node_ready.go:49] node "old-k8s-version-103800" has status "Ready":"True"
I0315 21:15:53.933928 3304 node_ready.go:38] duration metric: took 67.3813ms waiting for node "old-k8s-version-103800" to be "Ready" ...
I0315 21:15:53.933978 3304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:15:53.954978 3304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
I0315 21:15:55.263337 1332 ssh_runner.go:235] Completed: docker stop e3043962e5ef 6824568445c6 95e8431f8447 1f51fce69c22 c2ad60cad36d 0cb5567e32ab 51f04c53d355 a35da045d30f e92b1a5d6d0c e722cf7eda6b ed67a04efb8e 923853eff8e2 ac037b4a1329 ed570c25cf43 b0affa37d140 e5c85f584ed4 494a4383ddf0 aad97e15cb29 f5a744fc67d3 f03ec5c0e911 6b7373bd3644 d14ab3906f22 689b4ee40db7 c7d2681135fb 3ebfa7ac8c42 5f2ce6a254a2 f48bc2a716a0: (9.7945662s)
I0315 21:15:55.280007 1332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I0315 21:15:50.791437 4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt ...
I0315 21:15:50.811528 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.crt: {Name:mk1a7714c10c13a7d5c8fb1098bc038f605ad5c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:50.813206 4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key ...
I0315 21:15:50.813206 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\client.key: {Name:mk6d5b75048bc1f92c0f990335a0e77ae990113c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:50.814115 4576 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c
I0315 21:15:50.814711 4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0315 21:15:51.462758 4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c ...
I0315 21:15:51.462758 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c: {Name:mkbe5d6759390ded2e92d33f951b55651f871d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.465635 4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c ...
I0315 21:15:51.465635 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c: {Name:mkeabc19ce40a151a2335523f300cb2173b405a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.465984 4576 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt
I0315 21:15:51.467767 4576 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key.43b9df8c -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key
I0315 21:15:51.475866 4576 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key
I0315 21:15:51.475866 4576 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt with IP's: []
I0315 21:15:51.587728 4576 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt ...
I0315 21:15:51.587834 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt: {Name:mk7c62a1dda77e6dc05d2537ac317544e81f57a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.589765 4576 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key ...
I0315 21:15:51.589848 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key: {Name:mk8190fc7ddb34a4dc4e27e4845c7aee9bb89866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:15:51.598260 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
W0315 21:15:51.600164 4576 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
I0315 21:15:51.600164 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I0315 21:15:51.600164 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I0315 21:15:51.600849 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0315 21:15:51.600849 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0315 21:15:51.601444 4576 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
I0315 21:15:51.603533 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0315 21:15:51.706046 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0315 21:15:51.773521 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0315 21:15:51.835553 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-470000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0315 21:15:51.896596 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0315 21:15:51.961384 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0315 21:15:52.020772 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0315 21:15:52.161594 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0315 21:15:52.223729 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0315 21:15:52.295451 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
I0315 21:15:52.368796 4576 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
I0315 21:15:52.440447 4576 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0315 21:15:52.501633 4576 ssh_runner.go:195] Run: openssl version
I0315 21:15:52.539319 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
I0315 21:15:52.596897 4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
I0315 21:15:52.617219 4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
I0315 21:15:52.634012 4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
I0315 21:15:52.676116 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
I0315 21:15:52.732985 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
I0315 21:15:52.795424 4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
I0315 21:15:52.811657 4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
I0315 21:15:52.824204 4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
I0315 21:15:52.868586 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
I0315 21:15:52.920203 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0315 21:15:52.980456 4576 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:52.999359 4576 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:53.012117 4576 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0315 21:15:53.068045 4576 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
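The sequence above copies each CA certificate under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so the system trust store picks it up. The sketch below reproduces that pattern with os/exec; the path is illustrative, and it shells out to the same openssl and ln commands seen in the log rather than re-implementing the hash.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCert links certPath into /etc/ssl/certs under its OpenSSL
// subject hash, mirroring the "openssl x509 -hash" + "ln -fs" steps above.
func installCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	// Illustrative path from the log; creating the symlink for real requires root.
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}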
I0315 21:15:53.097602 4576 kubeadm.go:401] StartCluster: {Name:no-preload-470000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:no-preload-470000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:15:53.106935 4576 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:15:53.188443 4576 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0315 21:15:53.248153 4576 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 21:15:53.292225 4576 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0315 21:15:53.310023 4576 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 21:15:53.350373 4576 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0315 21:15:53.350373 4576 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0315 21:15:53.480709 4576 kubeadm.go:322] W0315 21:15:53.477710 2248 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0315 21:15:53.619484 4576 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0315 21:15:53.941137 4576 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0315 21:15:56.130859 3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
I0315 21:15:58.590590 3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
I0315 21:15:55.667015 1332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 21:15:55.884955 1332 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5643 Mar 15 21:13 /etc/kubernetes/admin.conf
-rw------- 1 root root 5657 Mar 15 21:13 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Mar 15 21:14 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5601 Mar 15 21:13 /etc/kubernetes/scheduler.conf
I0315 21:15:55.906317 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0315 21:15:55.970490 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0315 21:15:56.077831 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0315 21:15:56.164837 1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I0315 21:15:56.189369 1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0315 21:15:56.278633 1332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0315 21:15:56.350783 1332 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I0315 21:15:56.368651 1332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
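The kubeadm.go:166 lines above show the reconfiguration heuristic: each existing kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8443 endpoint, and any file that does not contain it is removed so that the following kubeadm init phases regenerate it. A hedged sketch of that check in plain Go (reading the file locally instead of over SSH) follows; it is not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfig removes path if it exists but does not mention
// the expected API server endpoint, so it can be regenerated later.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err // missing file: nothing to prune
	}
	if strings.Contains(string(data), endpoint) {
		return nil // endpoint already correct, keep the file
	}
	fmt.Printf("%q not found in %s - removing\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := pruneStaleKubeconfig(p, endpoint); err != nil {
			fmt.Println("skip:", err)
		}
	}
}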
I0315 21:15:56.472488 1332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 21:15:56.554151 1332 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0315 21:15:56.554288 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:56.838520 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:58.821631 1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.9831146s)
I0315 21:15:58.821631 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.241679 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:15:59.531884 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
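Rather than running a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml. The sketch below runs the same phase sequence locally with os/exec; the kubeadm binary path and config path are the ones shown in the log, and it assumes kubeadm is present on the node.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.26.2/kubeadm" // path from the log above
	config := "/var/tmp/minikube/kubeadm.yaml"
	// Same phase order as the ssh_runner calls above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		fmt.Printf("kubeadm %v\n%s\n", args, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}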
I0315 21:15:59.837145 1332 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:15:59.862394 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:00.562737 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:01.081569 3304 pod_ready.go:102] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:03.528471 3304 pod_ready.go:92] pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:03.528551 3304 pod_ready.go:81] duration metric: took 9.5735907s waiting for pod "coredns-5644d7b6d9-t9nj9" in "kube-system" namespace to be "Ready" ...
I0315 21:16:03.528551 3304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
I0315 21:16:03.557031 3304 pod_ready.go:92] pod "kube-proxy-cfcpx" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:03.557086 3304 pod_ready.go:81] duration metric: took 28.5355ms waiting for pod "kube-proxy-cfcpx" in "kube-system" namespace to be "Ready" ...
I0315 21:16:03.557086 3304 pod_ready.go:38] duration metric: took 9.623095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:03.557194 3304 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:16:03.572979 3304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.613975 3304 api_server.go:71] duration metric: took 11.5727472s to wait for apiserver process to appear ...
I0315 21:16:03.613975 3304 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:03.613975 3304 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65314/healthz ...
I0315 21:16:03.643577 3304 api_server.go:278] https://127.0.0.1:65314/healthz returned 200:
ok
I0315 21:16:03.656457 3304 api_server.go:140] control plane version: v1.16.0
I0315 21:16:03.656457 3304 api_server.go:130] duration metric: took 42.4823ms to wait for apiserver health ...
I0315 21:16:03.656537 3304 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:03.667107 3304 system_pods.go:59] 3 kube-system pods found
I0315 21:16:03.667180 3304 system_pods.go:61] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:03.667180 3304 system_pods.go:61] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:03.667180 3304 system_pods.go:61] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:03.667180 3304 system_pods.go:74] duration metric: took 10.5957ms to wait for pod list to return data ...
I0315 21:16:03.667180 3304 default_sa.go:34] waiting for default service account to be created ...
I0315 21:16:03.676892 3304 default_sa.go:45] found service account: "default"
I0315 21:16:03.677053 3304 default_sa.go:55] duration metric: took 9.8734ms for default service account to be created ...
I0315 21:16:03.677104 3304 system_pods.go:116] waiting for k8s-apps to be running ...
I0315 21:16:01.047261 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:01.561853 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:02.057572 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:02.554491 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.060987 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:03.560744 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.058096 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.574094 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:05.054883 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:05.558867 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:04.285721 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:04.285721 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:04.285721 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:04.285721 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:04.285721 3304 retry.go:31] will retry after 219.526595ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:04.529762 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:04.529762 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:04.529762 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:04.529762 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:04.529762 3304 retry.go:31] will retry after 379.322135ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:04.941567 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:04.941567 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:04.941567 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:04.941567 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:04.941567 3304 retry.go:31] will retry after 439.394592ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:05.410063 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:05.410190 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:05.410190 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:05.410246 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:05.410246 3304 retry.go:31] will retry after 547.53451ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:05.971998 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:05.971998 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:05.971998 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:05.971998 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:05.971998 3304 retry.go:31] will retry after 474.225372ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:06.466534 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:06.466718 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:06.466718 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:06.466718 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:06.466718 3304 retry.go:31] will retry after 680.585019ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:07.175871 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:07.175871 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:07.175871 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:07.175871 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:07.175871 3304 retry.go:31] will retry after 979.191711ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:08.550247 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:08.550247 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:08.550247 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:08.550247 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:08.550247 3304 retry.go:31] will retry after 1.232453731s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
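The retry.go:31 lines show the wait for kube-system components retrying with a growing, slightly jittered delay until etcd and the control-plane pods appear. A generic sketch of that retry-with-growing-backoff pattern is below; the check function and limits are placeholders, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls check until it succeeds or maxWait elapses,
// doubling the sleep between attempts (the log above uses a similar
// growing, jittered delay).
func retryWithBackoff(check func() error, maxWait time.Duration) error {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(maxWait)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().Add(delay).After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: etcd, kube-apiserver")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("result:", err)
}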
I0315 21:16:06.064030 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.559451 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:06.836193 1332 api_server.go:71] duration metric: took 6.999061s to wait for apiserver process to appear ...
I0315 21:16:06.836348 1332 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:06.836472 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:06.844702 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:16:07.349930 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:07.360047 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": EOF
I0315 21:16:07.852770 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:09.202438 11164 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.26.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-348900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f -I lz4 -xf /preloaded.tar -C /extractDir: (30.466496s)
I0315 21:16:09.202651 11164 kic.go:199] duration metric: took 30.483946 seconds to extract preloaded images to volume
I0315 21:16:09.210313 11164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0315 21:16:10.155940 11164 info.go:266] docker info: {ID:5XVN:YLWI:D57U:VRY6:Z2T2:XT44:UTQY:SUTG:X4EL:3KBQ:R56A:SLJU Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:85 OomKillDisable:true NGoroutines:71 SystemTime:2023-03-15 21:16:09.4164826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:53902323712 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.23 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2456e983eb9e37e47538f59ea18f2043c9a73640 Expected:2456e983eb9e37e47538f59ea18f2043c9a73640} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.3] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.18] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.25.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
I0315 21:16:10.165464 11164 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0315 21:16:11.073846 11164 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f
I0315 21:16:12.556246 11164 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-348900 --name embed-certs-348900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-348900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-348900 --network embed-certs-348900 --ip 192.168.67.2 --volume embed-certs-348900:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f: (1.4822642s)
I0315 21:16:12.573402 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Running}}
I0315 21:16:12.899930 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
I0315 21:16:13.219648 11164 cli_runner.go:164] Run: docker exec embed-certs-348900 stat /var/lib/dpkg/alternatives/iptables
I0315 21:16:09.817018 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:09.817099 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:09.817128 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:09.817171 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:09.817212 3304 retry.go:31] will retry after 1.174345338s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:11.034520 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:11.034666 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:11.034666 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:11.034666 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:11.034865 3304 retry.go:31] will retry after 1.617952037s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:12.678044 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:12.678093 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:12.678161 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:12.678161 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:12.678161 3304 retry.go:31] will retry after 2.664928648s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:12.856341 1332 api_server.go:268] stopped: https://127.0.0.1:65165/healthz: Get "https://127.0.0.1:65165/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0315 21:16:13.355164 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:13.531052 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0315 21:16:13.531052 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0315 21:16:13.856894 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:13.948093 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:13.948207 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:14.353756 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:14.444021 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:14.444582 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:14.850032 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:14.881729 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:14.881822 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:15.359619 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:15.458273 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W0315 21:16:15.458359 1332 api_server.go:102] status: https://127.0.0.1:65165/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I0315 21:16:15.846895 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:15.875897 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
ok
I0315 21:16:15.909269 1332 api_server.go:140] control plane version: v1.26.2
I0315 21:16:15.909297 1332 api_server.go:130] duration metric: took 9.0729659s to wait for apiserver health ...
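The api_server.go lines above poll https://127.0.0.1:65165/healthz roughly every 500ms, treating EOF, 403 (anonymous access before the RBAC bootstrap roles exist), and 500 (failed poststarthooks) as "not ready yet" until a plain 200 "ok" comes back. A minimal sketch of such a poll is below; it skips TLS verification because the probe is anonymous and local, and the port number is the one from this log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Anonymous local probe, so certificate verification is skipped (sketch only).
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://127.0.0.1:65165/healthz" // port taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}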
I0315 21:16:15.909353 1332 cni.go:84] Creating CNI manager for ""
I0315 21:16:15.909353 1332 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:15.912744 1332 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0315 21:16:13.756342 11164 oci.go:144] the created container "embed-certs-348900" has a running status.
I0315 21:16:13.756477 11164 kic.go:221] Creating ssh key for kic: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
I0315 21:16:14.119932 11164 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0315 21:16:14.639346 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
I0315 21:16:14.940713 11164 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0315 21:16:14.940713 11164 kic_runner.go:114] Args: [docker exec --privileged embed-certs-348900 chown docker:docker /home/docker/.ssh/authorized_keys]
I0315 21:16:15.500441 11164 kic.go:261] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa...
I0315 21:16:16.178648 11164 cli_runner.go:164] Run: docker container inspect embed-certs-348900 --format={{.State.Status}}
I0315 21:16:16.488888 11164 machine.go:88] provisioning docker machine ...
I0315 21:16:16.488888 11164 ubuntu.go:169] provisioning hostname "embed-certs-348900"
I0315 21:16:16.502911 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:16.840113 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:16.856244 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:16.856277 11164 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-348900 && echo "embed-certs-348900" | sudo tee /etc/hostname
I0315 21:16:17.147013 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-348900
I0315 21:16:17.160758 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:17.464133 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:17.465429 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:17.465429 11164 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-348900' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-348900/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-348900' | sudo tee -a /etc/hosts;
fi
fi
I0315 21:16:17.739135 11164 main.go:141] libmachine: SSH cmd err, output: <nil>:
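The libmachine lines above show provisioning running shell snippets (set the hostname, patch /etc/hosts) over SSH against the forwarded local port 65481 with the generated id_rsa key. A hedged sketch of running one such remote command with golang.org/x/crypto/ssh follows; host, port, user, and key path are taken from the log, error handling is minimal, and this is not minikube's sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as recorded in the log above.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:65481", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput(`sudo hostname embed-certs-348900 && echo "embed-certs-348900" | sudo tee /etc/hostname`)
	fmt.Printf("output: %s err: %v\n", out, err)
}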
I0315 21:16:17.739135 11164 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
I0315 21:16:17.739135 11164 ubuntu.go:177] setting up certificates
I0315 21:16:17.739135 11164 provision.go:83] configureAuth start
I0315 21:16:17.755889 11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
I0315 21:16:18.035724 11164 provision.go:138] copyHostCerts
I0315 21:16:18.036560 11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
I0315 21:16:18.036560 11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
I0315 21:16:18.037267 11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
I0315 21:16:18.038895 11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
I0315 21:16:18.038895 11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
I0315 21:16:18.039720 11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
I0315 21:16:18.041165 11164 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
I0315 21:16:18.041165 11164 exec_runner.go:207] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
I0315 21:16:18.041925 11164 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
I0315 21:16:18.042745 11164 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.embed-certs-348900 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-348900]
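provision.go:112 generates the Docker server certificate signed by the minikube CA, with SANs covering the container IP, localhost, and the machine name. The sketch below shows the general shape of such SAN-bearing certificate generation with crypto/x509; it creates a throwaway in-memory CA rather than loading minikube's real ca.pem/ca-key.pem, so it is illustrative only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key/cert (minikube would load ca.pem / ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with SANs like those listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-348900"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-348900"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	// The server cert is signed with the CA's private key.
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes signed by %s\n", len(srvDER), caCert.Subject.CommonName)
}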
I0315 21:16:15.383021 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:15.383097 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:15.383222 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:15.383222 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:15.383288 3304 retry.go:31] will retry after 2.578717787s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:17.995544 3304 system_pods.go:86] 3 kube-system pods found
I0315 21:16:17.995544 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:17.995544 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:17.995544 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:17.997123 3304 retry.go:31] will retry after 3.689658526s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:15.925415 1332 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0315 21:16:15.965847 1332 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
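The cni.go lines above choose the bridge CNI for the docker driver and copy a generated config to /etc/cni/net.d/1-k8s.conflist on the node. As an illustrative sketch only (not minikube's actual template), the snippet below emits a minimal bridge + portmap conflist of the kind such a file typically contains; the pod subnet is an assumed placeholder.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Minimal bridge CNI config list; the subnet value is illustrative.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}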
I0315 21:16:16.079955 1332 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:16.096342 1332 system_pods.go:59] 6 kube-system pods found
I0315 21:16:16.096342 1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0315 21:16:16.096342 1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0315 21:16:16.096342 1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0315 21:16:16.096342 1332 system_pods.go:74] duration metric: took 16.2168ms to wait for pod list to return data ...
I0315 21:16:16.096342 1332 node_conditions.go:102] verifying NodePressure condition ...
I0315 21:16:16.105140 1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0315 21:16:16.105226 1332 node_conditions.go:123] node cpu capacity is 16
I0315 21:16:16.105269 1332 node_conditions.go:105] duration metric: took 8.8846ms to run NodePressure ...
I0315 21:16:16.105316 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0315 21:16:17.333440 1332 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.2280887s)
I0315 21:16:17.333615 1332 kubeadm.go:769] waiting for restarted kubelet to initialise ...
I0315 21:16:17.354686 1332 kubeadm.go:784] kubelet initialised
I0315 21:16:17.354754 1332 kubeadm.go:785] duration metric: took 21.1391ms waiting for restarted kubelet to initialise ...
I0315 21:16:17.354822 1332 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:17.435085 1332 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:19.521467 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:18.251532 11164 provision.go:172] copyRemoteCerts
I0315 21:16:18.273974 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0315 21:16:18.283506 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:18.570902 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:18.768649 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0315 21:16:18.841686 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
I0315 21:16:18.905617 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0315 21:16:18.967699 11164 provision.go:86] duration metric: configureAuth took 1.2285308s
I0315 21:16:18.967770 11164 ubuntu.go:193] setting minikube options for container-runtime
I0315 21:16:18.968727 11164 config.go:182] Loaded profile config "embed-certs-348900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:16:18.979877 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:19.285905 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:19.286914 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:19.286979 11164 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0315 21:16:19.567687 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0315 21:16:19.567687 11164 ubuntu.go:71] root file system type: overlay
I0315 21:16:19.567687 11164 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0315 21:16:19.582813 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:19.874162 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:19.875396 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:19.875396 11164 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0315 21:16:20.174872 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0315 21:16:20.188182 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:20.453718 11164 main.go:141] libmachine: Using SSH client type: native
I0315 21:16:20.454944 11164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc8ee60] 0xc91d20 <nil> [] 0s} 127.0.0.1 65481 <nil> <nil>}
I0315 21:16:20.454944 11164 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0315 21:16:22.142486 11164 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2023-02-09 19:46:56.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2023-03-15 21:16:20.152689000 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
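Note that the diff || { ... } one-liner above is an idempotent update: diff exits non-zero only when the rendered unit differs from what is on disk, so the mv/daemon-reload/enable/restart sequence runs only on change. A quick way to confirm the override took effect inside the guest (commands are illustrative, not part of this run):
# check the active ExecStart and the TLS listener on 2376
sudo systemctl cat docker.service | grep ^ExecStart
sudo ss -ltn 'sport = :2376'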
I0315 21:16:22.142486 11164 machine.go:91] provisioned docker machine in 5.6536091s
I0315 21:16:22.142486 11164 client.go:171] LocalClient.Create took 50.2614576s
I0315 21:16:22.142486 11164 start.go:167] duration metric: libmachine.API.Create for "embed-certs-348900" took 50.2615841s
I0315 21:16:22.142486 11164 start.go:300] post-start starting for "embed-certs-348900" (driver="docker")
I0315 21:16:22.142486 11164 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0315 21:16:22.164869 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0315 21:16:22.176134 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:22.457317 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:22.664346 11164 ssh_runner.go:195] Run: cat /etc/os-release
I0315 21:16:22.686266 11164 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0315 21:16:22.686266 11164 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0315 21:16:22.686266 11164 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0315 21:16:22.686266 11164 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I0315 21:16:22.686266 11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
I0315 21:16:22.686902 11164 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
I0315 21:16:22.688699 11164 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem -> 88122.pem in /etc/ssl/certs
I0315 21:16:22.706595 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0315 21:16:22.738368 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /etc/ssl/certs/88122.pem (1708 bytes)
I0315 21:16:22.808162 11164 start.go:303] post-start completed in 665.6768ms
I0315 21:16:22.820367 11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
I0315 21:16:23.085450 11164 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\config.json ...
I0315 21:16:23.099327 11164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0315 21:16:23.105640 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:21.705945 3304 system_pods.go:86] 4 kube-system pods found
I0315 21:16:21.706010 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:21.706103 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Pending
I0315 21:16:21.706103 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:21.706185 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:21.706219 3304 retry.go:31] will retry after 5.083561084s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:22.006711 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:24.016700 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:23.396840 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:23.581013 11164 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0315 21:16:23.600663 11164 start.go:128] duration metric: createHost completed in 51.7244434s
I0315 21:16:23.600663 11164 start.go:83] releasing machines lock for "embed-certs-348900", held for 51.7253337s
I0315 21:16:23.612591 11164 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-348900
I0315 21:16:23.883432 11164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0315 21:16:23.894275 11164 ssh_runner.go:195] Run: cat /version.json
I0315 21:16:23.894535 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:23.897398 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:24.187980 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:24.211376 11164 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65481 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\embed-certs-348900\id_rsa Username:docker}
I0315 21:16:24.384184 11164 ssh_runner.go:195] Run: systemctl --version
I0315 21:16:24.554870 11164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0315 21:16:24.601965 11164 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
W0315 21:16:24.636442 11164 start.go:407] unable to name loopback interface in dockerConfigureNetworkPlugin: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
stdout:
stderr:
find: '\\etc\\cni\\net.d': No such file or directory
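The warning above is a host/guest path-separator mix-up: the pattern was joined with Windows backslashes, so find inside the Linux guest looks for a literal '\etc\cni\net.d'. The intended invocation, with the guest path spelled with forward slashes (sketch only; the grep/sed patching would then proceed as in the original command), is:
# corrected path for the loopback CNI patch step
sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled'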
I0315 21:16:24.653193 11164 ssh_runner.go:195] Run: which cri-dockerd
I0315 21:16:24.687918 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0315 21:16:24.720950 11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
I0315 21:16:24.782057 11164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0315 21:16:24.838659 11164 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0315 21:16:24.838782 11164 start.go:485] detecting cgroup driver to use...
I0315 21:16:24.838782 11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0315 21:16:24.839372 11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 21:16:24.907810 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0315 21:16:24.962942 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0315 21:16:24.999607 11164 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
I0315 21:16:25.016372 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0315 21:16:25.084691 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 21:16:25.123717 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0315 21:16:25.175564 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0315 21:16:25.220146 11164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0315 21:16:25.283915 11164 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0315 21:16:25.334938 11164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0315 21:16:25.388356 11164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0315 21:16:25.435298 11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:16:25.641460 11164 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0315 21:16:25.860833 11164 start.go:485] detecting cgroup driver to use...
I0315 21:16:25.861441 11164 detect.go:196] detected "cgroupfs" cgroup driver on host os
I0315 21:16:25.882735 11164 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0315 21:16:25.939579 11164 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0315 21:16:25.960420 11164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0315 21:16:26.059890 11164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0315 21:16:26.183579 11164 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0315 21:16:26.466649 11164 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0315 21:16:26.677013 11164 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
I0315 21:16:26.677080 11164 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
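The 144-byte /etc/docker/daemon.json pushed here is what pins Docker to the cgroupfs driver detected above. A representative file (illustrative values, not the exact payload):
# illustrative daemon.json; minikube's generated content may differ
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF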
I0315 21:16:26.756071 11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:16:26.959814 11164 ssh_runner.go:195] Run: sudo systemctl restart docker
I0315 21:16:27.700313 11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0315 21:16:27.915578 11164 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0315 21:16:28.148265 11164 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0315 21:16:26.834333 3304 system_pods.go:86] 5 kube-system pods found
I0315 21:16:26.834442 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:26.834494 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
I0315 21:16:26.834494 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:26.834542 3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
I0315 21:16:26.834542 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:26.834542 3304 retry.go:31] will retry after 6.853083205s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:29.227662 4576 kubeadm.go:322] [init] Using Kubernetes version: v1.26.2
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] Running pre-flight checks
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0315 21:16:29.227763 4576 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0315 21:16:29.229013 4576 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0315 21:16:29.233640 4576 out.go:204] - Generating certificates and keys ...
I0315 21:16:29.234315 4576 kubeadm.go:322] [certs] Using existing ca certificate authority
I0315 21:16:29.234315 4576 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
I0315 21:16:29.234315 4576 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
I0315 21:16:29.234862 4576 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
I0315 21:16:29.235050 4576 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
I0315 21:16:29.235155 4576 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
I0315 21:16:29.235331 4576 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
I0315 21:16:29.235774 4576 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
I0315 21:16:29.235871 4576 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
I0315 21:16:29.235871 4576 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-470000] and IPs [192.168.85.2 127.0.0.1 ::1]
I0315 21:16:29.236566 4576 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
I0315 21:16:29.236865 4576 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
I0315 21:16:29.237080 4576 kubeadm.go:322] [certs] Generating "sa" key and public key
I0315 21:16:29.237437 4576 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0315 21:16:29.237659 4576 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
I0315 21:16:29.237841 4576 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0315 21:16:29.238095 4576 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0315 21:16:29.238325 4576 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0315 21:16:29.238639 4576 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0315 21:16:29.238966 4576 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0315 21:16:29.239000 4576 kubeadm.go:322] [kubelet-start] Starting the kubelet
I0315 21:16:29.239299 4576 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0315 21:16:29.244122 4576 out.go:204] - Booting up control plane ...
I0315 21:16:29.244122 4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0315 21:16:29.244122 4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0315 21:16:29.244875 4576 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0315 21:16:29.245231 4576 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0315 21:16:29.245856 4576 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0315 21:16:29.246514 4576 kubeadm.go:322] [apiclient] All control plane components are healthy after 27.005043 seconds
I0315 21:16:29.247464 4576 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0315 21:16:29.247889 4576 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0315 21:16:29.247889 4576 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
I0315 21:16:29.249317 4576 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-470000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0315 21:16:29.249647 4576 kubeadm.go:322] [bootstrap-token] Using token: g8jwe6.dtydkfj8fkgcjwxk
I0315 21:16:29.253362 4576 out.go:204] - Configuring RBAC rules ...
I0315 21:16:29.253362 4576 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0315 21:16:29.253982 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0315 21:16:29.254534 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0315 21:16:29.254971 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0315 21:16:29.255290 4576 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0315 21:16:29.255767 4576 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0315 21:16:29.256101 4576 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0315 21:16:29.256445 4576 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
I0315 21:16:29.256697 4576 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
I0315 21:16:29.256697 4576 kubeadm.go:322]
I0315 21:16:29.256697 4576 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
I0315 21:16:29.256697 4576 kubeadm.go:322]
I0315 21:16:29.256697 4576 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
I0315 21:16:29.257255 4576 kubeadm.go:322]
I0315 21:16:29.257312 4576 kubeadm.go:322] mkdir -p $HOME/.kube
I0315 21:16:29.257312 4576 kubeadm.go:322] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0315 21:16:29.258206 4576 kubeadm.go:322] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0315 21:16:29.258206 4576 kubeadm.go:322]
I0315 21:16:29.258392 4576 kubeadm.go:322] Alternatively, if you are the root user, you can run:
I0315 21:16:29.258392 4576 kubeadm.go:322]
I0315 21:16:29.258392 4576 kubeadm.go:322] export KUBECONFIG=/etc/kubernetes/admin.conf
I0315 21:16:29.258392 4576 kubeadm.go:322]
I0315 21:16:29.259028 4576 kubeadm.go:322] You should now deploy a pod network to the cluster.
I0315 21:16:29.259028 4576 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0315 21:16:29.259028 4576 kubeadm.go:322] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0315 21:16:29.259586 4576 kubeadm.go:322]
I0315 21:16:29.259793 4576 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
I0315 21:16:29.259793 4576 kubeadm.go:322] and service account keys on each node and then running the following as root:
I0315 21:16:29.259793 4576 kubeadm.go:322]
I0315 21:16:29.260469 4576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
I0315 21:16:29.260726 4576 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716 \
I0315 21:16:29.260890 4576 kubeadm.go:322] --control-plane
I0315 21:16:29.260890 4576 kubeadm.go:322]
I0315 21:16:29.261169 4576 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
I0315 21:16:29.261228 4576 kubeadm.go:322]
I0315 21:16:29.261412 4576 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g8jwe6.dtydkfj8fkgcjwxk \
I0315 21:16:29.261412 4576 kubeadm.go:322] --discovery-token-ca-cert-hash sha256:bbf210a1ce3ae6ed86699fbddc86294be9a5c7abc143d537001f0a224592f716
I0315 21:16:29.261412 4576 cni.go:84] Creating CNI manager for ""
I0315 21:16:29.261412 4576 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:29.266347 4576 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0315 21:16:28.373729 11164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0315 21:16:28.596843 11164 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0315 21:16:28.641503 11164 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0315 21:16:28.659715 11164 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0315 21:16:28.687449 11164 start.go:553] Will wait 60s for crictl version
I0315 21:16:28.704098 11164 ssh_runner.go:195] Run: which crictl
I0315 21:16:28.753769 11164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0315 21:16:29.076356 11164 start.go:569] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 23.0.1
RuntimeApiVersion: v1alpha2
I0315 21:16:29.092004 11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0315 21:16:29.211116 11164 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0315 21:16:26.048179 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:28.050667 1332 pod_ready.go:102] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"False"
I0315 21:16:29.001447 1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.001447 1332 pod_ready.go:81] duration metric: took 11.5663842s waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.001447 1332 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.028330 1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.028330 1332 pod_ready.go:81] duration metric: took 26.8832ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.028330 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.057628 1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.057628 1332 pod_ready.go:81] duration metric: took 29.2978ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.057628 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.092004 1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.092004 1332 pod_ready.go:81] duration metric: took 34.3758ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.092004 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.131434 1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.131486 1332 pod_ready.go:81] duration metric: took 39.482ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.131486 1332 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.402295 1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:29.402345 1332 pod_ready.go:81] duration metric: took 270.8098ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.402345 1332 pod_ready.go:38] duration metric: took 12.0475003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:29.402386 1332 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:16:29.426130 1332 ops.go:34] apiserver oom_adj: -16
I0315 21:16:29.426187 1332 kubeadm.go:637] restartCluster took 1m4.338895s
I0315 21:16:29.426266 1332 kubeadm.go:403] StartCluster complete in 1m4.4532784s
I0315 21:16:29.426351 1332 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:29.426601 1332 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:16:29.429857 1332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:29.432982 1332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 21:16:29.432982 1332 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
I0315 21:16:29.433680 1332 config.go:182] Loaded profile config "pause-073300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:16:29.438415 1332 out.go:177] * Enabled addons:
I0315 21:16:29.443738 1332 addons.go:499] enable addons completed in 10.8462ms: enabled=[]
I0315 21:16:29.452842 1332 kapi.go:59] client config for pause-073300: &rest.Config{Host:"https://127.0.0.1:65165", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-073300\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1deb720), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0315 21:16:29.467764 1332 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-073300" context rescaled to 1 replicas
I0315 21:16:29.467764 1332 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:16:29.470858 1332 out.go:177] * Verifying Kubernetes components...
I0315 21:16:29.484573 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:29.761590 1332 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
I0315 21:16:29.775423 1332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-073300
I0315 21:16:30.117208 1332 node_ready.go:35] waiting up to 6m0s for node "pause-073300" to be "Ready" ...
I0315 21:16:30.134817 1332 node_ready.go:49] node "pause-073300" has status "Ready":"True"
I0315 21:16:30.134886 1332 node_ready.go:38] duration metric: took 17.4789ms waiting for node "pause-073300" to be "Ready" ...
I0315 21:16:30.135066 1332 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:30.162562 1332 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.219441 1332 pod_ready.go:92] pod "coredns-787d4945fb-2q246" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:30.219583 1332 pod_ready.go:81] duration metric: took 57.0207ms waiting for pod "coredns-787d4945fb-2q246" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.219583 1332 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.608418 1332 pod_ready.go:92] pod "etcd-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:30.608458 1332 pod_ready.go:81] duration metric: took 388.876ms waiting for pod "etcd-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:30.608458 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:29.286357 4576 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0315 21:16:29.434851 4576 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
I0315 21:16:29.759117 4576 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0315 21:16:29.777121 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:29.784090 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:29.333720 11164 out.go:204] * Preparing Kubernetes v1.26.2 on Docker 23.0.1 ...
I0315 21:16:29.346161 11164 cli_runner.go:164] Run: docker exec -t embed-certs-348900 dig +short host.docker.internal
I0315 21:16:29.900879 11164 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0315 21:16:29.916562 11164 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0315 21:16:29.935552 11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 21:16:29.995136 11164 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-348900
I0315 21:16:30.338304 11164 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime docker
I0315 21:16:30.350351 11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0315 21:16:30.410968 11164 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0315 21:16:30.410997 11164 docker.go:560] Images already preloaded, skipping extraction
I0315 21:16:30.423332 11164 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0315 21:16:30.503657 11164 docker.go:630] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/pause:3.9
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0315 21:16:30.503657 11164 cache_images.go:84] Images are preloaded, skipping loading
I0315 21:16:30.514842 11164 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0315 21:16:30.592454 11164 cni.go:84] Creating CNI manager for ""
I0315 21:16:30.593071 11164 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0315 21:16:30.593126 11164 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0315 21:16:30.593164 11164 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-348900 NodeName:embed-certs-348900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
I0315 21:16:30.593164 11164 kubeadm.go:177] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "embed-certs-348900"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.26.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
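This rendered config is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later handed to kubeadm; the manual equivalent on the node would be roughly (minikube adds further flags such as preflight-error ignores, omitted here):
# rough manual equivalent using the same pinned binaries and config path
sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml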
I0315 21:16:30.593164 11164 kubeadm.go:968] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.26.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-348900 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
[Install]
config:
{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0315 21:16:30.608458 11164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.2
I0315 21:16:30.650429 11164 binaries.go:44] Found k8s binaries, skipping transfer
I0315 21:16:30.663574 11164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0315 21:16:30.692787 11164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
I0315 21:16:30.740392 11164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0315 21:16:30.785258 11164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
I0315 21:16:30.856683 11164 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0315 21:16:30.874232 11164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0315 21:16:30.910227 11164 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900 for IP: 192.168.67.2
I0315 21:16:30.910227 11164 certs.go:186] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:30.910959 11164 certs.go:195] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
I0315 21:16:30.910959 11164 certs.go:195] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
I0315 21:16:30.912090 11164 certs.go:315] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key
I0315 21:16:30.912245 11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt with IP's: []
I0315 21:16:31.176322 11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt ...
I0315 21:16:31.176322 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.crt: {Name:mk3adaad25efd04206f4069d51ba11c764eb6365 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.185180 11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key ...
I0315 21:16:31.186710 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\client.key: {Name:mkf9f54f56133eba18d6e348fef5a1556121e000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.186988 11164 certs.go:315] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e
I0315 21:16:31.187994 11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0315 21:16:31.980645 11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e ...
I0315 21:16:31.980645 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e: {Name:mk2261dfadf80693084f767fa62cccae0b07268d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.987167 11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e ...
I0315 21:16:31.987167 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e: {Name:mk003ae0b84dcfe7543e40c97ad15121d53cc917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:31.988356 11164 certs.go:333] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt
I0315 21:16:31.999575 11164 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key.c7fa3a9e -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key
I0315 21:16:32.001372 11164 certs.go:315] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key
I0315 21:16:32.001790 11164 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt with IP's: []
I0315 21:16:32.228690 11164 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt ...
I0315 21:16:32.228763 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt: {Name:mk6cbb1c106aa2dec99a9338908a5ea76d5206ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:32.230290 11164 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key ...
I0315 21:16:32.230290 11164 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key: {Name:mk5c3038fe2a59bd4ebdf1cb320d733f3de9b70e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:32.243236 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem (1338 bytes)
W0315 21:16:32.243866 11164 certs.go:397] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812_empty.pem, impossibly tiny 0 bytes
I0315 21:16:32.244089 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
I0315 21:16:32.244671 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I0315 21:16:32.245081 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0315 21:16:32.245162 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0315 21:16:32.245850 11164 certs.go:401] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem (1708 bytes)
I0315 21:16:32.248063 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0315 21:16:32.321659 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0315 21:16:32.402505 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0315 21:16:32.491666 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\embed-certs-348900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0315 21:16:32.579600 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0315 21:16:32.651879 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0315 21:16:32.716051 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0315 21:16:32.797235 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0315 21:16:32.885295 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\8812.pem --> /usr/share/ca-certificates/8812.pem (1338 bytes)
I0315 21:16:32.963869 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\88122.pem --> /usr/share/ca-certificates/88122.pem (1708 bytes)
I0315 21:16:33.029503 11164 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0315 21:16:33.108304 11164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0315 21:16:33.169580 11164 ssh_runner.go:195] Run: openssl version
I0315 21:16:33.195467 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0315 21:16:33.230164 11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0315 21:16:31.017074 1332 pod_ready.go:92] pod "kube-apiserver-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.017074 1332 pod_ready.go:81] duration metric: took 408.6175ms waiting for pod "kube-apiserver-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.017074 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.395349 1332 pod_ready.go:92] pod "kube-controller-manager-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.395349 1332 pod_ready.go:81] duration metric: took 378.275ms waiting for pod "kube-controller-manager-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.395349 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.792495 1332 pod_ready.go:92] pod "kube-proxy-m4md5" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:31.792495 1332 pod_ready.go:81] duration metric: took 397.1476ms waiting for pod "kube-proxy-m4md5" in "kube-system" namespace to be "Ready" ...
I0315 21:16:31.792495 1332 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:32.219569 1332 pod_ready.go:92] pod "kube-scheduler-pause-073300" in "kube-system" namespace has status "Ready":"True"
I0315 21:16:32.220120 1332 pod_ready.go:81] duration metric: took 427.0739ms waiting for pod "kube-scheduler-pause-073300" in "kube-system" namespace to be "Ready" ...
I0315 21:16:32.220120 1332 pod_ready.go:38] duration metric: took 2.0850147s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:32.220120 1332 api_server.go:51] waiting for apiserver process to appear ...
I0315 21:16:32.232971 1332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0315 21:16:32.332638 1332 api_server.go:71] duration metric: took 2.8648801s to wait for apiserver process to appear ...
I0315 21:16:32.332638 1332 api_server.go:87] waiting for apiserver healthz status ...
I0315 21:16:32.332638 1332 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:65165/healthz ...
I0315 21:16:32.362918 1332 api_server.go:278] https://127.0.0.1:65165/healthz returned 200:
ok
I0315 21:16:32.430820 1332 api_server.go:140] control plane version: v1.26.2
I0315 21:16:32.430820 1332 api_server.go:130] duration metric: took 98.1819ms to wait for apiserver health ...
I0315 21:16:32.430820 1332 system_pods.go:43] waiting for kube-system pods to appear ...
I0315 21:16:32.455349 1332 system_pods.go:59] 6 kube-system pods found
I0315 21:16:32.455486 1332 system_pods.go:61] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
I0315 21:16:32.455486 1332 system_pods.go:61] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
I0315 21:16:32.455544 1332 system_pods.go:61] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
I0315 21:16:32.455642 1332 system_pods.go:61] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:32.455642 1332 system_pods.go:61] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
I0315 21:16:32.455685 1332 system_pods.go:61] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
I0315 21:16:32.455785 1332 system_pods.go:74] duration metric: took 24.9239ms to wait for pod list to return data ...
I0315 21:16:32.455785 1332 default_sa.go:34] waiting for default service account to be created ...
I0315 21:16:32.637154 1332 default_sa.go:45] found service account: "default"
I0315 21:16:32.637301 1332 default_sa.go:55] duration metric: took 181.4813ms for default service account to be created ...
I0315 21:16:32.637301 1332 system_pods.go:116] waiting for k8s-apps to be running ...
I0315 21:16:32.844031 1332 system_pods.go:86] 6 kube-system pods found
I0315 21:16:32.844031 1332 system_pods.go:89] "coredns-787d4945fb-2q246" [13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "etcd-pause-073300" [08b62e5b-2e8e-45a6-976f-51c9524724a0] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-apiserver-pause-073300" [f7f5b883-f6de-4ad7-adc7-c48ad03ab3c0] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-controller-manager-pause-073300" [2691065d-e6be-4ff6-902d-6d474453c5e9] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-proxy-m4md5" [428ae579-2b68-4526-a2b0-d8bb5922870f] Running
I0315 21:16:32.844031 1332 system_pods.go:89] "kube-scheduler-pause-073300" [0cdbd626-152a-47fb-a2d9-08d22e639996] Running
I0315 21:16:32.844031 1332 system_pods.go:126] duration metric: took 206.7296ms to wait for k8s-apps to be running ...
I0315 21:16:32.844031 1332 system_svc.go:44] waiting for kubelet service to be running ....
I0315 21:16:32.858698 1332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:32.902525 1332 system_svc.go:56] duration metric: took 56.9493ms WaitForService to wait for kubelet.
I0315 21:16:32.902598 1332 kubeadm.go:578] duration metric: took 3.4348415s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I0315 21:16:32.902669 1332 node_conditions.go:102] verifying NodePressure condition ...
I0315 21:16:33.016156 1332 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki
I0315 21:16:33.016241 1332 node_conditions.go:123] node cpu capacity is 16
I0315 21:16:33.016278 1332 node_conditions.go:105] duration metric: took 113.5716ms to run NodePressure ...
I0315 21:16:33.016316 1332 start.go:228] waiting for startup goroutines ...
I0315 21:16:33.016316 1332 start.go:233] waiting for cluster config update ...
I0315 21:16:33.016351 1332 start.go:242] writing updated cluster config ...
I0315 21:16:33.039378 1332 ssh_runner.go:195] Run: rm -f paused
I0315 21:16:33.289071 1332 start.go:555] kubectl: 1.18.2, cluster: 1.26.2 (minor skew: 8)
I0315 21:16:33.292949 1332 out.go:177]
W0315 21:16:33.295479 1332 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.26.2.
I0315 21:16:33.297706 1332 out.go:177] - Want kubectl v1.26.2? Try 'minikube kubectl -- get pods -A'
I0315 21:16:33.301501 1332 out.go:177] * Done! kubectl is now configured to use "pause-073300" cluster and "default" namespace by default
I0315 21:16:33.717595 3304 system_pods.go:86] 6 kube-system pods found
I0315 21:16:33.717595 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:33.717595 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
I0315 21:16:33.717595 3304 system_pods.go:89] "kube-controller-manager-old-k8s-version-103800" [eaf30ba4-8812-46a0-a046-aa376656a6eb] Pending
I0315 21:16:33.717595 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:33.717595 3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Pending
I0315 21:16:33.717595 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:33.717595 3304 retry.go:31] will retry after 7.396011667s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
I0315 21:16:31.527682 4576 ssh_runner.go:235] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.7684448s)
I0315 21:16:31.527682 4576 ops.go:34] apiserver oom_adj: -16
I0315 21:16:31.527682 4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e minikube.k8s.io/name=no-preload-470000 minikube.k8s.io/updated_at=2023_03_15T21_16_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1.7435955s)
I0315 21:16:31.528138 4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1.7509547s)
I0315 21:16:31.546907 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:32.651879 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:33.157563 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:33.663575 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:34.656851 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:35.154601 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:35.655087 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:33.243154 11164 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 15 19:59 /usr/share/ca-certificates/minikubeCA.pem
I0315 21:16:33.255159 11164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0315 21:16:33.304401 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0315 21:16:33.376002 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8812.pem && ln -fs /usr/share/ca-certificates/8812.pem /etc/ssl/certs/8812.pem"
I0315 21:16:33.437551 11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8812.pem
I0315 21:16:33.457067 11164 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 15 20:10 /usr/share/ca-certificates/8812.pem
I0315 21:16:33.472683 11164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8812.pem
I0315 21:16:33.512180 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8812.pem /etc/ssl/certs/51391683.0"
I0315 21:16:33.594619 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88122.pem && ln -fs /usr/share/ca-certificates/88122.pem /etc/ssl/certs/88122.pem"
I0315 21:16:33.678203 11164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88122.pem
I0315 21:16:33.719266 11164 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 15 20:10 /usr/share/ca-certificates/88122.pem
I0315 21:16:33.747747 11164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88122.pem
I0315 21:16:33.801073 11164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/88122.pem /etc/ssl/certs/3ec20f2e.0"
I0315 21:16:33.858449 11164 kubeadm.go:401] StartCluster: {Name:embed-certs-348900 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1678473806-15991@sha256:c7e2010fcc4584b4a079087c1c0a443479e9062a1998351b11de5747bc1c557f Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:embed-certs-348900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
I0315 21:16:33.875455 11164 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0315 21:16:33.983958 11164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0315 21:16:34.106643 11164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0315 21:16:34.166229 11164 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
I0315 21:16:34.191715 11164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0315 21:16:34.244848 11164 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0315 21:16:34.245009 11164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0315 21:16:34.448526 11164 kubeadm.go:322] W0315 21:16:34.443079 1446 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
I0315 21:16:34.611048 11164 kubeadm.go:322] [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
I0315 21:16:34.911105 11164 kubeadm.go:322] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0315 21:16:36.155261 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:36.653892 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:37.161298 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:37.652599 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:38.154702 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:38.645135 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:39.164169 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:39.658552 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:40.157502 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:41.157705 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:42.163009 4576 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0315 21:16:43.340102 4576 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.26.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.1770953s)
I0315 21:16:43.340102 4576 kubeadm.go:1073] duration metric: took 13.5808883s to wait for elevateKubeSystemPrivileges.
I0315 21:16:43.340102 4576 kubeadm.go:403] StartCluster complete in 50.2426818s
I0315 21:16:43.340102 4576 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:43.341117 4576 settings.go:150] Updating kubeconfig: C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
I0315 21:16:43.344404 4576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0315 21:16:43.346496 4576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0315 21:16:43.346496 4576 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
I0315 21:16:43.346496 4576 addons.go:66] Setting storage-provisioner=true in profile "no-preload-470000"
I0315 21:16:43.347047 4576 addons.go:228] Setting addon storage-provisioner=true in "no-preload-470000"
I0315 21:16:43.347047 4576 addons.go:66] Setting default-storageclass=true in profile "no-preload-470000"
I0315 21:16:43.347047 4576 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-470000"
I0315 21:16:43.347221 4576 host.go:66] Checking if "no-preload-470000" exists ...
I0315 21:16:43.347249 4576 config.go:182] Loaded profile config "no-preload-470000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.2
I0315 21:16:43.379427 4576 cli_runner.go:164] Run: docker container inspect no-preload-470000 --format={{.State.Status}}
I0315 21:16:43.381213 4576 cli_runner.go:164] Run: docker container inspect no-preload-470000 --format={{.State.Status}}
I0315 21:16:43.745668 4576 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0315 21:16:41.133563 3304 system_pods.go:86] 7 kube-system pods found
I0315 21:16:41.133563 3304 system_pods.go:89] "coredns-5644d7b6d9-t9nj9" [7c081b28-446f-472d-a63a-60f7c6bac420] Running
I0315 21:16:41.133686 3304 system_pods.go:89] "etcd-old-k8s-version-103800" [177eccf1-ef20-41f5-9031-eca4485bea7b] Running
I0315 21:16:41.133686 3304 system_pods.go:89] "kube-apiserver-old-k8s-version-103800" [2bad5a6b-39e8-46ef-8bd8-d1571bdfb33d] Pending
I0315 21:16:41.133686 3304 system_pods.go:89] "kube-controller-manager-old-k8s-version-103800" [eaf30ba4-8812-46a0-a046-aa376656a6eb] Running
I0315 21:16:41.133781 3304 system_pods.go:89] "kube-proxy-cfcpx" [c26f229d-21c9-4f80-83cd-a48b495d28b5] Running
I0315 21:16:41.133781 3304 system_pods.go:89] "kube-scheduler-old-k8s-version-103800" [2c673315-0d1e-4a5d-a5d7-738e38d7cf84] Running
I0315 21:16:41.133781 3304 system_pods.go:89] "storage-provisioner" [d2706a33-a440-4f8c-8449-93f29f7f37bd] Running
I0315 21:16:41.133781 3304 retry.go:31] will retry after 8.389208299s: missing components: kube-apiserver
I0315 21:16:43.747702 4576 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0315 21:16:43.748309 4576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0315 21:16:43.757168 4576 addons.go:228] Setting addon default-storageclass=true in "no-preload-470000"
I0315 21:16:43.757927 4576 host.go:66] Checking if "no-preload-470000" exists ...
I0315 21:16:43.767770 4576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-470000
I0315 21:16:43.791366 4576 cli_runner.go:164] Run: docker container inspect no-preload-470000 --format={{.State.Status}}
I0315 21:16:44.188975 4576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65272 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\no-preload-470000\id_rsa Username:docker}
I0315 21:16:44.217518 4576 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
I0315 21:16:44.217590 4576 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0315 21:16:44.244690 4576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-470000
I0315 21:16:44.434385 4576 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-470000" context rescaled to 1 replicas
I0315 21:16:44.434385 4576 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.26.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0315 21:16:44.439077 4576 out.go:177] * Verifying Kubernetes components...
I0315 21:16:44.472892 4576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0315 21:16:44.602535 4576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65272 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\no-preload-470000\id_rsa Username:docker}
I0315 21:16:44.646754 4576 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.3002612s)
I0315 21:16:44.647304 4576 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.26.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0315 21:16:44.663208 4576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-470000
I0315 21:16:44.860417 4576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0315 21:16:45.012884 4576 node_ready.go:35] waiting up to 6m0s for node "no-preload-470000" to be "Ready" ...
I0315 21:16:45.053358 4576 node_ready.go:49] node "no-preload-470000" has status "Ready":"True"
I0315 21:16:45.053358 4576 node_ready.go:38] duration metric: took 40.4219ms waiting for node "no-preload-470000" to be "Ready" ...
I0315 21:16:45.053358 4576 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0315 21:16:45.161915 4576 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-vlgxh" in "kube-system" namespace to be "Ready" ...
I0315 21:16:45.558149 4576 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
*
* ==> Docker <==
* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:48 UTC. --
Mar 15 21:15:16 pause-073300 dockerd[5130]: time="2023-03-15T21:15:16.627341500Z" level=info msg="Loading containers: start."
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.180814100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.293764400Z" level=info msg="Loading containers: done."
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403670900Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403801700Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403820400Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403829500Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.403946800Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.404077100Z" level=info msg="Daemon has completed initialization"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.495876500Z" level=info msg="[core] [Server #7] Server created" module=grpc
Mar 15 21:15:17 pause-073300 systemd[1]: Started Docker Application Container Engine.
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.517552200Z" level=info msg="API listen on [::]:2376"
Mar 15 21:15:17 pause-073300 dockerd[5130]: time="2023-03-15T21:15:17.543627500Z" level=info msg="API listen on /var/run/docker.sock"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744692100Z" level=info msg="ignoring event" container=923853eff8e2f1864e6cfeaaffa94363f41b1b6d4244613c11e443d63b83f2f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.744884600Z" level=info msg="ignoring event" container=51f04c53d355992b4720b6fe3fb08eeebaffdc34d08262d17db9f24dc486c5f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.839172700Z" level=info msg="ignoring event" container=c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.840438600Z" level=info msg="ignoring event" container=e92b1a5d6d0c83422026888e04b4103fbb1a6aad2a814bd916a79bec7e5cb8d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.853642900Z" level=info msg="ignoring event" container=a35da045d30f2532ff1a5d88e989615ddf33df4f90272696757ca1b38c1a5eba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927068700Z" level=info msg="ignoring event" container=ed67a04efb8ec818ab6782a05f9c291801a4458a1a0233c184aaf80f6bd8a373 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:46 pause-073300 dockerd[5130]: time="2023-03-15T21:15:46.927810400Z" level=info msg="ignoring event" container=95e8431f84471d1685f5d908a022789eb2644a61f5292997dfe306c1e9821c27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.033930300Z" level=info msg="ignoring event" container=e722cf7eda6bbc9bcf453efc486e10336872ccd7d74dbeb91e51085c094b0009 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.128698500Z" level=info msg="ignoring event" container=1f51fce69c226f17529256ccf645edbf972854fc5f36bf524dd8bb1a98d65d9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:47 pause-073300 dockerd[5130]: time="2023-03-15T21:15:47.434269500Z" level=info msg="ignoring event" container=6824568445c66b1f085e714f1a98df4ca1f40f4f7f67ed8f6069fbde15fd4b87 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:51 pause-073300 dockerd[5130]: time="2023-03-15T21:15:51.189996200Z" level=info msg="ignoring event" container=e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 15 21:15:55 pause-073300 dockerd[5130]: time="2023-03-15T21:15:55.079374900Z" level=info msg="ignoring event" container=0cb5567e32abb23418b668dfb851f2300e7fd6400791daeca39d46d8cf78cb16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
c3986aec6e000 5185b96f0becf 22 seconds ago Running coredns 2 a5bac8046c295
b7b4669a56d5c 6f64e7135a6ec 24 seconds ago Running kube-proxy 2 0e90c4b9c88b9
aba41f11fdc83 fce326961ae2d 48 seconds ago Running etcd 2 f6e4108617808
571d485669178 db8f409d9a5d7 48 seconds ago Running kube-scheduler 2 cc13660f35478
e6bb3d9a35ff0 240e201d5b0d8 48 seconds ago Running kube-controller-manager 3 c468745ca2cf5
88f9444587356 63d3239c3c159 48 seconds ago Running kube-apiserver 3 5496303bf33fe
e3043962e5ef5 5185b96f0becf About a minute ago Exited coredns 1 51f04c53d3559
6824568445c66 fce326961ae2d About a minute ago Exited etcd 1 a35da045d30f2
95e8431f84471 db8f409d9a5d7 About a minute ago Exited kube-scheduler 1 923853eff8e2f
1f51fce69c226 240e201d5b0d8 About a minute ago Exited kube-controller-manager 2 e722cf7eda6bb
c2ad60cad36db 6f64e7135a6ec About a minute ago Exited kube-proxy 1 e92b1a5d6d0c8
0cb5567e32abb 63d3239c3c159 About a minute ago Exited kube-apiserver 2 ed67a04efb8ec
*
* ==> coredns [c3986aec6e00] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:39857 - 53557 "HINFO IN 4117550418294164078.6192551117797702913. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0986876s
*
* ==> coredns [e3043962e5ef] <==
* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
[INFO] 127.0.0.1:58165 - 40858 "HINFO IN 6114658028450402923.1632777775304523244. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0560197s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
*
* ==> describe nodes <==
* Name: pause-073300
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-073300
kubernetes.io/os=linux
minikube.k8s.io/commit=11fd2e5d7d4b8360c6d8a8b2c2e61a071aa8631e
minikube.k8s.io/name=pause-073300
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2023_03_15T21_14_05_0700
minikube.k8s.io/version=v1.29.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 15 Mar 2023 21:13:54 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-073300
AcquireTime: <unset>
RenewTime: Wed, 15 Mar 2023 21:16:44 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:13:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:13:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:13:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 15 Mar 2023 21:16:13 +0000 Wed, 15 Mar 2023 21:14:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.103.2
Hostname: pause-073300
Capacity:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
Allocatable:
cpu: 16
ephemeral-storage: 263174212Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 52638988Ki
pods: 110
System Info:
Machine ID: b1932dc991aa41bd806e459062926d45
System UUID: b1932dc991aa41bd806e459062926d45
Boot ID: c49fbee3-0cdd-49eb-8984-31df821a263f
Kernel Version: 5.10.102.1-microsoft-standard-WSL2
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://23.0.1
Kubelet Version: v1.26.2
Kube-Proxy Version: v1.26.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-787d4945fb-2q246 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 2m32s
kube-system etcd-pause-073300 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 2m49s
kube-system kube-apiserver-pause-073300 250m (1%) 0 (0%) 0 (0%) 0 (0%) 2m49s
kube-system kube-controller-manager-pause-073300 200m (1%) 0 (0%) 0 (0%) 0 (0%) 2m50s
kube-system kube-proxy-m4md5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m32s
kube-system kube-scheduler-pause-073300 100m (0%) 0 (0%) 0 (0%) 0 (0%) 2m41s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (4%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m24s kube-proxy
Normal Starting 23s kube-proxy
Normal NodeHasSufficientPID 3m19s (x7 over 3m20s) kubelet Node pause-073300 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 3m19s (x8 over 3m20s) kubelet Node pause-073300 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 3m19s (x8 over 3m20s) kubelet Node pause-073300 status is now: NodeHasSufficientMemory
Normal Starting 2m44s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m44s kubelet Node pause-073300 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m44s kubelet Node pause-073300 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m44s kubelet Node pause-073300 status is now: NodeHasSufficientPID
Normal NodeNotReady 2m43s kubelet Node pause-073300 status is now: NodeNotReady
Normal NodeReady 2m42s kubelet Node pause-073300 status is now: NodeReady
Normal NodeAllocatableEnforced 2m42s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 2m33s node-controller Node pause-073300 event: Registered Node pause-073300 in Controller
Normal Starting 50s kubelet Starting kubelet.
Normal NodeHasSufficientPID 49s (x7 over 49s) kubelet Node pause-073300 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 49s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 48s (x8 over 49s) kubelet Node pause-073300 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48s (x8 over 49s) kubelet Node pause-073300 status is now: NodeHasNoDiskPressure
Normal RegisteredNode 21s node-controller Node pause-073300 event: Registered Node pause-073300 in Controller
*
* ==> dmesg <==
* [Mar15 20:45] WSL2: Performing memory compaction.
[Mar15 20:47] WSL2: Performing memory compaction.
[Mar15 20:48] WSL2: Performing memory compaction.
[Mar15 20:49] WSL2: Performing memory compaction.
[Mar15 20:51] WSL2: Performing memory compaction.
[Mar15 20:52] WSL2: Performing memory compaction.
[Mar15 20:53] WSL2: Performing memory compaction.
[Mar15 20:54] WSL2: Performing memory compaction.
[Mar15 20:56] WSL2: Performing memory compaction.
[Mar15 20:57] WSL2: Performing memory compaction.
[Mar15 20:58] WSL2: Performing memory compaction.
[Mar15 20:59] WSL2: Performing memory compaction.
[Mar15 21:00] WSL2: Performing memory compaction.
[Mar15 21:01] WSL2: Performing memory compaction.
[Mar15 21:03] WSL2: Performing memory compaction.
[ +24.007152] overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[Mar15 21:04] process 'docker/tmp/qemu-check145175011/check' started with executable stack
[ +21.555954] WSL2: Performing memory compaction.
[Mar15 21:06] WSL2: Performing memory compaction.
[Mar15 21:07] hrtimer: interrupt took 920300 ns
[Mar15 21:09] WSL2: Performing memory compaction.
[Mar15 21:11] WSL2: Performing memory compaction.
[Mar15 21:12] WSL2: Performing memory compaction.
[Mar15 21:13] WSL2: Performing memory compaction.
[Mar15 21:15] WSL2: Performing memory compaction.
*
* ==> etcd [6824568445c6] <==
* {"level":"info","ts":"2023-03-15T21:15:44.027Z","caller":"traceutil/trace.go:171","msg":"trace[2137636385] transaction","detail":"{read_only:false; number_of_response:1; response_revision:415; }","duration":"100.9434ms","start":"2023-03-15T21:15:43.926Z","end":"2023-03-15T21:15:44.027Z","steps":["trace[2137636385] 'process raft request' (duration: 100.5034ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.540Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"106.6806ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873768454336989569 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" mod_revision:412 > success:<request_put:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" value_size:582 >> failure:<request_range:<key:\"/registry/leases/kube-system/kube-apiserver-nzsucgtdly32izejp7ytxjrkii\" > >>","response":"size:16"}
{"level":"info","ts":"2023-03-15T21:15:44.541Z","caller":"traceutil/trace.go:171","msg":"trace[493093877] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"109.7451ms","start":"2023-03-15T21:15:44.431Z","end":"2023-03-15T21:15:44.541Z","steps":["trace[493093877] 'process raft request' (duration: 109.4963ms)"],"step_count":1}
{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[455019656] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"198.5194ms","start":"2023-03-15T21:15:44.343Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[455019656] 'process raft request' (duration: 87.4089ms)","trace[455019656] 'compare' (duration: 106.3186ms)"],"step_count":2}
{"level":"info","ts":"2023-03-15T21:15:44.542Z","caller":"traceutil/trace.go:171","msg":"trace[1743432337] linearizableReadLoop","detail":"{readStateIndex:444; appliedIndex:443; }","duration":"112.8874ms","start":"2023-03-15T21:15:44.430Z","end":"2023-03-15T21:15:44.542Z","steps":["trace[1743432337] 'read index received' (duration: 852.1µs)","trace[1743432337] 'applied index is now lower than readState.Index' (duration: 112.0303ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-15T21:15:44.544Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.3137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-node-lease\" ","response":"range_response_count:1 size:363"}
{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[83833859] range","detail":"{range_begin:/registry/namespaces/kube-node-lease; range_end:; response_count:1; response_revision:418; }","duration":"115.03ms","start":"2023-03-15T21:15:44.429Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[83833859] 'agreement among raft nodes before linearized reading' (duration: 113.2035ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.545Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"111.3129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:116"}
{"level":"info","ts":"2023-03-15T21:15:44.545Z","caller":"traceutil/trace.go:171","msg":"trace[1382087029] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:418; }","duration":"111.3651ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.545Z","steps":["trace[1382087029] 'agreement among raft nodes before linearized reading' (duration: 111.2411ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.547Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"113.4412ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:118"}
{"level":"info","ts":"2023-03-15T21:15:44.547Z","caller":"traceutil/trace.go:171","msg":"trace[1257507815] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:418; }","duration":"113.4898ms","start":"2023-03-15T21:15:44.434Z","end":"2023-03-15T21:15:44.547Z","steps":["trace[1257507815] 'agreement among raft nodes before linearized reading' (duration: 113.3486ms)"],"step_count":1}
{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[1166219815] linearizableReadLoop","detail":"{readStateIndex:447; appliedIndex:446; }","duration":"121.4317ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[1166219815] 'read index received' (duration: 3.7558ms)","trace[1166219815] 'applied index is now lower than readState.Index' (duration: 117.6698ms)"],"step_count":2}
{"level":"info","ts":"2023-03-15T21:15:44.956Z","caller":"traceutil/trace.go:171","msg":"trace[513205189] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"125.7589ms","start":"2023-03-15T21:15:44.830Z","end":"2023-03-15T21:15:44.956Z","steps":["trace[513205189] 'process raft request' (duration: 94.9828ms)","trace[513205189] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/pods/kube-system/kube-proxy-m4md5; req_size:4522; } (duration: 27.8113ms)"],"step_count":2}
{"level":"warn","ts":"2023-03-15T21:15:44.957Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"121.7804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
{"level":"info","ts":"2023-03-15T21:15:44.958Z","caller":"traceutil/trace.go:171","msg":"trace[1937091289] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:421; }","duration":"123.6279ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.958Z","steps":["trace[1937091289] 'agreement among raft nodes before linearized reading' (duration: 121.5636ms)"],"step_count":1}
{"level":"warn","ts":"2023-03-15T21:15:44.965Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"129.7433ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:64 size:57899"}
{"level":"info","ts":"2023-03-15T21:15:44.965Z","caller":"traceutil/trace.go:171","msg":"trace[1243225417] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:64; response_revision:421; }","duration":"129.8213ms","start":"2023-03-15T21:15:44.835Z","end":"2023-03-15T21:15:44.965Z","steps":["trace[1243225417] 'agreement among raft nodes before linearized reading' (duration: 123.3525ms)"],"step_count":1}
{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
{"level":"info","ts":"2023-03-15T21:15:46.132Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
WARNING: 2023/03/15 21:15:46 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
{"level":"info","ts":"2023-03-15T21:15:46.436Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
{"level":"info","ts":"2023-03-15T21:15:46.534Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:15:46.538Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-073300","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
*
* ==> etcd [aba41f11fdc8] <==
* {"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:16:07.246Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.103.2:2380"}
{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2023-03-15T21:16:07.247Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2023-03-15T21:16:07.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-15T21:16:07.326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 3"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 3"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 3"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 4"}
{"level":"info","ts":"2023-03-15T21:16:09.138Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:pause-073300 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
{"level":"info","ts":"2023-03-15T21:16:09.139Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
{"level":"info","ts":"2023-03-15T21:16:09.143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-03-15T21:16:09.144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2023-03-15T21:16:09.147Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.103.2:2379"}
*
* ==> kernel <==
* 21:16:49 up 1:24, 0 users, load average: 15.02, 10.62, 6.77
Linux pause-073300 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [0cb5567e32ab] <==
* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0315 21:15:54.852783 1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0315 21:15:55.001054 1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
W0315 21:15:55.018769 1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {
"Addr": "127.0.0.1:2379",
"ServerName": "127.0.0.1",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
*
* ==> kube-apiserver [88f944458735] <==
* I0315 21:16:13.321430 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0315 21:16:13.321719 1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
I0315 21:16:13.322696 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0315 21:16:13.320670 1 crd_finalizer.go:266] Starting CRDFinalizer
I0315 21:16:13.320231 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0315 21:16:13.324414 1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
I0315 21:16:13.437824 1 shared_informer.go:280] Caches are synced for configmaps
I0315 21:16:13.525354 1 shared_informer.go:280] Caches are synced for crd-autoregister
I0315 21:16:13.623881 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0315 21:16:13.624222 1 apf_controller.go:366] Running API Priority and Fairness config worker
I0315 21:16:13.624252 1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
I0315 21:16:13.624258 1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
I0315 21:16:13.624333 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0315 21:16:13.625322 1 shared_informer.go:280] Caches are synced for node_authorizer
I0315 21:16:13.625384 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0315 21:16:13.625410 1 cache.go:39] Caches are synced for autoregister controller
I0315 21:16:13.630897 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0315 21:16:14.357698 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0315 21:16:16.572417 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0315 21:16:16.602561 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0315 21:16:16.951884 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0315 21:16:17.136478 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0315 21:16:17.246459 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0315 21:16:28.244694 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0315 21:16:28.342519 1 controller.go:615] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager [1f51fce69c22] <==
* I0315 21:15:33.346996 1 serving.go:348] Generated self-signed cert in-memory
I0315 21:15:39.060876 1 controllermanager.go:182] Version: v1.26.2
I0315 21:15:39.061047 1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:15:39.072013 1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0315 21:15:39.072120 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0315 21:15:39.072625 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:15:39.072677 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
*
* ==> kube-controller-manager [e6bb3d9a35ff] <==
* I0315 21:16:28.124587 1 shared_informer.go:280] Caches are synced for cidrallocator
I0315 21:16:28.124592 1 shared_informer.go:280] Caches are synced for crt configmap
I0315 21:16:28.124598 1 shared_informer.go:280] Caches are synced for endpoint
I0315 21:16:28.124661 1 shared_informer.go:280] Caches are synced for HPA
I0315 21:16:28.124898 1 shared_informer.go:280] Caches are synced for GC
I0315 21:16:28.124186 1 shared_informer.go:280] Caches are synced for taint
I0315 21:16:28.125247 1 taint_manager.go:206] "Starting NoExecuteTaintManager"
I0315 21:16:28.125313 1 taint_manager.go:211] "Sending events to api server"
I0315 21:16:28.125358 1 node_lifecycle_controller.go:1438] Initializing eviction metric for zone:
W0315 21:16:28.125464 1 node_lifecycle_controller.go:1053] Missing timestamp for Node pause-073300. Assuming now as a timestamp.
I0315 21:16:28.125524 1 node_lifecycle_controller.go:1254] Controller detected that zone is now in state Normal.
I0315 21:16:28.126198 1 event.go:294] "Event occurred" object="pause-073300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-073300 event: Registered Node pause-073300 in Controller"
I0315 21:16:28.126964 1 shared_informer.go:280] Caches are synced for stateful set
I0315 21:16:28.227084 1 shared_informer.go:280] Caches are synced for namespace
I0315 21:16:28.227137 1 shared_informer.go:280] Caches are synced for disruption
I0315 21:16:28.227298 1 shared_informer.go:280] Caches are synced for deployment
I0315 21:16:28.227547 1 shared_informer.go:280] Caches are synced for ReplicaSet
I0315 21:16:28.227631 1 shared_informer.go:280] Caches are synced for service account
I0315 21:16:28.229520 1 shared_informer.go:273] Waiting for caches to sync for garbage collector
I0315 21:16:28.233560 1 shared_informer.go:280] Caches are synced for resource quota
I0315 21:16:28.236781 1 shared_informer.go:280] Caches are synced for resource quota
I0315 21:16:28.529112 1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
I0315 21:16:28.534472 1 shared_informer.go:280] Caches are synced for garbage collector
I0315 21:16:28.562852 1 shared_informer.go:280] Caches are synced for garbage collector
I0315 21:16:28.562973 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [b7b4669a56d5] <==
* I0315 21:16:25.942265 1 node.go:163] Successfully retrieved node IP: 192.168.103.2
I0315 21:16:25.944192 1 server_others.go:109] "Detected node IP" address="192.168.103.2"
I0315 21:16:25.944360 1 server_others.go:535] "Using iptables proxy"
I0315 21:16:26.134212 1 server_others.go:176] "Using iptables Proxier"
I0315 21:16:26.134360 1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0315 21:16:26.134376 1 server_others.go:184] "Creating dualStackProxier for iptables"
I0315 21:16:26.134395 1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0315 21:16:26.134427 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0315 21:16:26.135408 1 server.go:655] "Version info" version="v1.26.2"
I0315 21:16:26.135540 1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:16:26.136322 1 config.go:317] "Starting service config controller"
I0315 21:16:26.136477 1 shared_informer.go:273] Waiting for caches to sync for service config
I0315 21:16:26.136504 1 config.go:226] "Starting endpoint slice config controller"
I0315 21:16:26.136526 1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
I0315 21:16:26.136357 1 config.go:444] "Starting node config controller"
I0315 21:16:26.137498 1 shared_informer.go:273] Waiting for caches to sync for node config
I0315 21:16:26.236790 1 shared_informer.go:280] Caches are synced for endpoint slice config
I0315 21:16:26.238214 1 shared_informer.go:280] Caches are synced for node config
I0315 21:16:26.238275 1 shared_informer.go:280] Caches are synced for service config
*
* ==> kube-proxy [c2ad60cad36d] <==
* E0315 21:15:29.627155 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
E0315 21:15:30.826046 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": dial tcp 192.168.103.2:8443: connect: connection refused
E0315 21:15:43.235847 1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-073300": net/http: TLS handshake timeout
*
* ==> kube-scheduler [571d48566917] <==
* I0315 21:16:07.677853 1 serving.go:348] Generated self-signed cert in-memory
I0315 21:16:13.656832 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
I0315 21:16:13.656978 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:16:13.756221 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0315 21:16:13.756343 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0315 21:16:13.758353 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0315 21:16:13.758370 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:16:13.759778 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0315 21:16:13.759904 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 21:16:13.757625 1 shared_informer.go:273] Waiting for caches to sync for RequestHeaderAuthRequestController
I0315 21:16:13.758377 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0315 21:16:13.924166 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0315 21:16:13.924382 1 shared_informer.go:280] Caches are synced for RequestHeaderAuthRequestController
I0315 21:16:13.924585 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kube-scheduler [95e8431f8447] <==
* I0315 21:15:34.052612 1 serving.go:348] Generated self-signed cert in-memory
W0315 21:15:44.136305 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0315 21:15:44.140386 1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0315 21:15:44.225673 1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
W0315 21:15:44.225720 1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0315 21:15:44.445561 1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.2"
I0315 21:15:44.445741 1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0315 21:15:44.453477 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I0315 21:15:44.455841 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0315 21:15:44.456010 1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 21:15:44.456059 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0315 21:15:44.925804 1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0315 21:15:46.348879 1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
I0315 21:15:46.350010 1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
E0315 21:15:46.352703 1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
I0315 21:15:46.355076 1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E0315 21:15:46.355314 1 run.go:74] "command failed" err="finished without leader elect"
*
* ==> kubelet <==
* -- Logs begin at Wed 2023-03-15 21:13:03 UTC, end at Wed 2023-03-15 21:16:49 UTC. --
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.766354 7548 kubelet_node_status.go:73] "Successfully registered node" node="pause-073300"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.826254 7548 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.828960 7548 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.830844 7548 apiserver.go:52] "Watching apiserver"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846713 7548 topology_manager.go:210] "Topology Admit Handler"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.846988 7548 topology_manager.go:210] "Topology Admit Handler"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.925554 7548 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944245 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-proxy\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944547 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-xtables-lock\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944610 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/428ae579-2b68-4526-a2b0-d8bb5922870f-lib-modules\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.944669 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vbb\" (UniqueName: \"kubernetes.io/projected/428ae579-2b68-4526-a2b0-d8bb5922870f-kube-api-access-b7vbb\") pod \"kube-proxy-m4md5\" (UID: \"428ae579-2b68-4526-a2b0-d8bb5922870f\") " pod="kube-system/kube-proxy-m4md5"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945094 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-config-volume\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945520 7548 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbnj9\" (UniqueName: \"kubernetes.io/projected/13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc-kube-api-access-mbnj9\") pod \"coredns-787d4945fb-2q246\" (UID: \"13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc\") " pod="kube-system/coredns-787d4945fb-2q246"
Mar 15 21:16:13 pause-073300 kubelet[7548]: I0315 21:16:13.945563 7548 reconciler.go:41] "Reconciler: start to sync state"
Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.149192 7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.150324 7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154149 7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154342 7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.154347 7548 kuberuntime_manager.go:872] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.26.2,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b7vbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-m4md5_kube-system(428ae579-2b68-4526-a2b0-d8bb5922870f): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.155707 7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-m4md5" podUID=428ae579-2b68-4526-a2b0-d8bb5922870f
Mar 15 21:16:14 pause-073300 kubelet[7548]: I0315 21:16:14.763783 7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768377 7548 kuberuntime_manager.go:872] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.9.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mbnj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-787d4945fb-2q246_kube-system(13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Mar 15 21:16:14 pause-073300 kubelet[7548]: E0315 21:16:14.768684 7548 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-787d4945fb-2q246" podUID=13663f7e-7d6f-41a7-a0e4-a7a0f0eab4cc
Mar 15 21:16:25 pause-073300 kubelet[7548]: I0315 21:16:25.248648 7548 scope.go:115] "RemoveContainer" containerID="c2ad60cad36db8cde30e0a93c9255fa18e5df353a41dd6259afeb2043222ac62"
Mar 15 21:16:27 pause-073300 kubelet[7548]: I0315 21:16:27.246680 7548 scope.go:115] "RemoveContainer" containerID="e3043962e5ef540d703084ce9ddfc5f027eaab5ffceeeadfdff71e94f0eee0ce"
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-073300 -n pause-073300: (2.5376001s)
helpers_test.go:261: (dbg) Run: kubectl --context pause-073300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (134.75s)