=== RUN TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run: out/minikube-linux-amd64 start -p pause-171530 --alsologtostderr -v=1 --driver=docker --container-runtime=docker
E1107 17:17:06.410751 10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
E1107 17:17:16.651759 10129 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/skaffold-171112/client.crt: no such file or directory
=== CONT TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-171530 --alsologtostderr -v=1 --driver=docker --container-runtime=docker: (46.559158423s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got:
-- stdout --
* [pause-171530] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=15310
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
* Using the docker driver based on existing profile
* Starting control plane node pause-171530 in cluster pause-171530
* Pulling base image ...
* Updating the running docker "pause-171530" container ...
* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default
-- /stdout --
** stderr **
I1107 17:17:05.571294 265599 out.go:296] Setting OutFile to fd 1 ...
I1107 17:17:05.571401 265599 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:17:05.571412 265599 out.go:309] Setting ErrFile to fd 2...
I1107 17:17:05.571416 265599 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:17:05.571524 265599 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
I1107 17:17:05.572110 265599 out.go:303] Setting JSON to false
I1107 17:17:05.573931 265599 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3577,"bootTime":1667837849,"procs":1072,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1107 17:17:05.573997 265599 start.go:126] virtualization: kvm guest
I1107 17:17:05.576735 265599 out.go:177] * [pause-171530] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I1107 17:17:05.578493 265599 out.go:177] - MINIKUBE_LOCATION=15310
I1107 17:17:05.578464 265599 notify.go:220] Checking for updates...
I1107 17:17:05.579960 265599 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 17:17:05.581495 265599 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
I1107 17:17:05.583272 265599 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
I1107 17:17:05.584752 265599 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1107 17:17:05.586943 265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:05.587399 265599 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 17:17:05.619996 265599 docker.go:137] docker version: linux-20.10.21
I1107 17:17:05.620105 265599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:17:05.724537 265599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-11-07 17:17:05.642826795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:17:05.724669 265599 docker.go:254] overlay module found
I1107 17:17:05.726899 265599 out.go:177] * Using the docker driver based on existing profile
I1107 17:17:05.728247 265599 start.go:282] selected driver: docker
I1107 17:17:05.728267 265599 start.go:808] validating driver "docker" against &{Name:pause-171530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:05.728376 265599 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 17:17:05.728459 265599 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:17:05.834872 265599 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:54 SystemTime:2022-11-07 17:17:05.751080253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:17:05.835529 265599 cni.go:95] Creating CNI manager for ""
I1107 17:17:05.835549 265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 17:17:05.835565 265599 start_flags.go:317] config:
{Name:pause-171530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:05.839858 265599 out.go:177] * Starting control plane node pause-171530 in cluster pause-171530
I1107 17:17:05.841800 265599 cache.go:120] Beginning downloading kic base image for docker with docker
I1107 17:17:05.844023 265599 out.go:177] * Pulling base image ...
I1107 17:17:05.845691 265599 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:05.845756 265599 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1107 17:17:05.845776 265599 cache.go:57] Caching tarball of preloaded images
I1107 17:17:05.845787 265599 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1107 17:17:05.846094 265599 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1107 17:17:05.846111 265599 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1107 17:17:05.846271 265599 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/config.json ...
I1107 17:17:05.873255 265599 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1107 17:17:05.873279 265599 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1107 17:17:05.873289 265599 cache.go:208] Successfully downloaded all kic artifacts
I1107 17:17:05.873322 265599 start.go:364] acquiring machines lock for pause-171530: {Name:mk2020e0b0b9cf87e78302c105d3589b81431a7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1107 17:17:05.873408 265599 start.go:368] acquired machines lock for "pause-171530" in 65.893µs
I1107 17:17:05.873440 265599 start.go:96] Skipping create...Using existing machine configuration
I1107 17:17:05.873452 265599 fix.go:55] fixHost starting:
I1107 17:17:05.873695 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:05.904476 265599 fix.go:103] recreateIfNeeded on pause-171530: state=Running err=<nil>
W1107 17:17:05.904506 265599 fix.go:129] unexpected machine state, will restart: <nil>
I1107 17:17:05.907141 265599 out.go:177] * Updating the running docker "pause-171530" container ...
I1107 17:17:05.908814 265599 machine.go:88] provisioning docker machine ...
I1107 17:17:05.908865 265599 ubuntu.go:169] provisioning hostname "pause-171530"
I1107 17:17:05.908920 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:05.936340 265599 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:05.936536 265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49369 <nil> <nil>}
I1107 17:17:05.936554 265599 main.go:134] libmachine: About to run SSH command:
sudo hostname pause-171530 && echo "pause-171530" | sudo tee /etc/hostname
I1107 17:17:06.063498 265599 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-171530
I1107 17:17:06.063580 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:06.089781 265599 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:06.089944 265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49369 <nil> <nil>}
I1107 17:17:06.089970 265599 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\spause-171530' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-171530/g' /etc/hosts;
else
echo '127.0.1.1 pause-171530' | sudo tee -a /etc/hosts;
fi
fi
I1107 17:17:06.206772 265599 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1107 17:17:06.206802 265599 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
I1107 17:17:06.206824 265599 ubuntu.go:177] setting up certificates
I1107 17:17:06.206833 265599 provision.go:83] configureAuth start
I1107 17:17:06.206876 265599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-171530
I1107 17:17:06.233722 265599 provision.go:138] copyHostCerts
I1107 17:17:06.233807 265599 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
I1107 17:17:06.233825 265599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
I1107 17:17:06.233906 265599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
I1107 17:17:06.233996 265599 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
I1107 17:17:06.234011 265599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
I1107 17:17:06.234043 265599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
I1107 17:17:06.234121 265599 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
I1107 17:17:06.234137 265599 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
I1107 17:17:06.234180 265599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
I1107 17:17:06.234287 265599 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.pause-171530 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube pause-171530]
I1107 17:17:06.399357 265599 provision.go:172] copyRemoteCerts
I1107 17:17:06.399419 265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1107 17:17:06.399460 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:06.426411 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:06.514347 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1107 17:17:06.533219 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I1107 17:17:06.553157 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1107 17:17:06.573469 265599 provision.go:86] duration metric: configureAuth took 366.618787ms
I1107 17:17:06.573508 265599 ubuntu.go:193] setting minikube options for container-runtime
I1107 17:17:06.573739 265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:06.573831 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:06.601570 265599 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:06.601719 265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49369 <nil> <nil>}
I1107 17:17:06.601733 265599 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1107 17:17:06.719151 265599 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1107 17:17:06.719182 265599 ubuntu.go:71] root file system type: overlay
I1107 17:17:06.719350 265599 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1107 17:17:06.719411 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:06.746626 265599 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:06.746845 265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49369 <nil> <nil>}
I1107 17:17:06.746914 265599 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1107 17:17:06.872971 265599 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1107 17:17:06.873051 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:06.902094 265599 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:06.902277 265599 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49369 <nil> <nil>}
I1107 17:17:06.902307 265599 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1107 17:17:07.027043 265599 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1107 17:17:07.027081 265599 machine.go:91] provisioned docker machine in 1.118240745s
I1107 17:17:07.027091 265599 start.go:300] post-start starting for "pause-171530" (driver="docker")
I1107 17:17:07.027101 265599 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1107 17:17:07.027157 265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1107 17:17:07.027203 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:07.055663 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:07.152315 265599 ssh_runner.go:195] Run: cat /etc/os-release
I1107 17:17:07.155419 265599 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1107 17:17:07.155449 265599 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1107 17:17:07.155461 265599 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1107 17:17:07.155469 265599 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1107 17:17:07.155484 265599 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
I1107 17:17:07.155537 265599 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
I1107 17:17:07.155621 265599 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
I1107 17:17:07.155717 265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1107 17:17:07.163115 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
I1107 17:17:07.259375 265599 start.go:303] post-start completed in 232.268718ms
I1107 17:17:07.259457 265599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1107 17:17:07.259504 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:07.292327 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:07.380341 265599 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1107 17:17:07.384765 265599 fix.go:57] fixHost completed within 1.511308744s
I1107 17:17:07.384788 265599 start.go:83] releasing machines lock for "pause-171530", held for 1.511368311s
I1107 17:17:07.384864 265599 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-171530
I1107 17:17:07.413876 265599 ssh_runner.go:195] Run: systemctl --version
I1107 17:17:07.413938 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:07.413976 265599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1107 17:17:07.414049 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:07.447827 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:07.448603 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:07.565735 265599 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1107 17:17:07.580677 265599 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1107 17:17:07.580749 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1107 17:17:07.595113 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1107 17:17:07.609542 265599 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1107 17:17:07.717706 265599 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1107 17:17:07.844274 265599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:07.951423 265599 ssh_runner.go:195] Run: sudo systemctl restart docker
I1107 17:17:24.054911 265599 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.103437064s)
I1107 17:17:24.054984 265599 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1107 17:17:24.265227 265599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:24.361565 265599 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1107 17:17:24.371575 265599 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1107 17:17:24.371644 265599 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1107 17:17:24.374825 265599 start.go:472] Will wait 60s for crictl version
I1107 17:17:24.374887 265599 ssh_runner.go:195] Run: sudo crictl version
I1107 17:17:24.405233 265599 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I1107 17:17:24.405294 265599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 17:17:24.433798 265599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 17:17:24.469003 265599 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I1107 17:17:24.469098 265599 cli_runner.go:164] Run: docker network inspect pause-171530 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:17:24.494258 265599 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1107 17:17:24.497966 265599 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:24.498057 265599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 17:17:24.522994 265599 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1107 17:17:24.523018 265599 docker.go:543] Images already preloaded, skipping extraction
I1107 17:17:24.523070 265599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 17:17:24.547938 265599 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:3.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1107 17:17:24.547962 265599 cache_images.go:84] Images are preloaded, skipping loading
I1107 17:17:24.548029 265599 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1107 17:17:24.624335 265599 cni.go:95] Creating CNI manager for ""
I1107 17:17:24.624371 265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 17:17:24.624381 265599 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1107 17:17:24.624400 265599 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-171530 NodeName:pause-171530 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1107 17:17:24.624599 265599 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "pause-171530"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1107 17:17:24.624734 265599 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-171530 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1107 17:17:24.624798 265599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I1107 17:17:24.634057 265599 binaries.go:44] Found k8s binaries, skipping transfer
I1107 17:17:24.634131 265599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1107 17:17:24.641098 265599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (474 bytes)
I1107 17:17:24.654380 265599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1107 17:17:24.668303 265599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2035 bytes)
I1107 17:17:24.681843 265599 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1107 17:17:24.685111 265599 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530 for IP: 192.168.85.2
I1107 17:17:24.685219 265599 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
I1107 17:17:24.685293 265599 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
I1107 17:17:24.685377 265599 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key
I1107 17:17:24.685457 265599 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/apiserver.key.43b9df8c
I1107 17:17:24.685521 265599 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/proxy-client.key
I1107 17:17:24.685626 265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
W1107 17:17:24.685663 265599 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
I1107 17:17:24.685686 265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
I1107 17:17:24.685722 265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
I1107 17:17:24.685755 265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
I1107 17:17:24.685791 265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
I1107 17:17:24.685845 265599 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
I1107 17:17:24.686475 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1107 17:17:24.705101 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1107 17:17:24.724639 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1107 17:17:24.742006 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1107 17:17:24.760861 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1107 17:17:24.780509 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1107 17:17:24.799673 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1107 17:17:24.819781 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1107 17:17:24.839265 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
I1107 17:17:24.857824 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
I1107 17:17:24.876187 265599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1107 17:17:24.894054 265599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1107 17:17:24.907411 265599 ssh_runner.go:195] Run: openssl version
I1107 17:17:24.912881 265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1107 17:17:24.921594 265599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:24.925481 265599 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 7 16:46 /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:24.925551 265599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:24.930905 265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1107 17:17:24.938422 265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
I1107 17:17:24.946334 265599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
I1107 17:17:24.949621 265599 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 7 16:50 /usr/share/ca-certificates/10129.pem
I1107 17:17:24.949680 265599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
I1107 17:17:24.955062 265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
I1107 17:17:24.962592 265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
I1107 17:17:24.970789 265599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
I1107 17:17:24.974091 265599 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 7 16:50 /usr/share/ca-certificates/101292.pem
I1107 17:17:24.974155 265599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
I1107 17:17:24.979020 265599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
I1107 17:17:24.986014 265599 kubeadm.go:396] StartCluster: {Name:pause-171530 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:pause-171530 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:24.986135 265599 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1107 17:17:25.008611 265599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1107 17:17:25.015852 265599 kubeadm.go:411] found existing configuration files, will attempt cluster restart
I1107 17:17:25.015875 265599 kubeadm.go:627] restartCluster start
I1107 17:17:25.015912 265599 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1107 17:17:25.022497 265599 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1107 17:17:25.023332 265599 kubeconfig.go:92] found "pause-171530" server: "https://192.168.85.2:8443"
I1107 17:17:25.024642 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:17:25.025325 265599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1107 17:17:25.032719 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:25.032770 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:25.041455 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:25.241876 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:25.241954 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:25.253157 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:25.442363 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:25.442456 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:25.451732 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:25.642291 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:25.642376 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:25.653784 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:25.842094 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:25.842176 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:25.851161 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:26.042481 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:26.042570 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:26.051935 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:26.242166 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:26.242259 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:26.252068 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:26.442444 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:26.442521 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:26.452466 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:26.641665 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:26.641756 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:26.651194 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:26.842411 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:26.842497 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:26.852123 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:27.042449 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:27.042520 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:27.051754 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:27.242055 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:27.242125 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:27.252675 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:27.442021 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:27.442086 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:27.452028 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:27.642321 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:27.642396 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:27.655163 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:27.842316 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:27.842406 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:27.919765 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:28.042018 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:28.042091 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:28.066147 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:28.066178 265599 api_server.go:165] Checking apiserver status ...
I1107 17:17:28.066222 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W1107 17:17:28.134945 265599 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:
stderr:
I1107 17:17:28.134975 265599 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
I1107 17:17:28.134983 265599 kubeadm.go:1114] stopping kube-system containers ...
I1107 17:17:28.135044 265599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1107 17:17:28.245468 265599 docker.go:444] Stopping containers: [bc4811d3f9f1 7c093d736ba0 42f2c39561b1 c9629a7195e0 c109021f97b0 cdc8d9ab8c01 6977abb3bdd5 70d021ab7352 509fa11824cf 0d39f99a8173 1ed4b2e0931b 6a6aa007d5d6 72b0bc6dbf86 307735ded540 925a87ac16d6 6d1abd3e30d7 29682c53aad3 c0cb5971f049 69420675fbf2 a947b84a16e9 cd9fbcf66902 0329006f68e6 0c9d8ff11e72]
I1107 17:17:28.245557 265599 ssh_runner.go:195] Run: docker stop bc4811d3f9f1 7c093d736ba0 42f2c39561b1 c9629a7195e0 c109021f97b0 cdc8d9ab8c01 6977abb3bdd5 70d021ab7352 509fa11824cf 0d39f99a8173 1ed4b2e0931b 6a6aa007d5d6 72b0bc6dbf86 307735ded540 925a87ac16d6 6d1abd3e30d7 29682c53aad3 c0cb5971f049 69420675fbf2 a947b84a16e9 cd9fbcf66902 0329006f68e6 0c9d8ff11e72
I1107 17:17:28.885183 265599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1107 17:17:28.970717 265599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:17:28.980900 265599 kubeadm.go:155] found existing configuration files:
-rw------- 1 root root 5639 Nov 7 17:15 /etc/kubernetes/admin.conf
-rw------- 1 root root 5656 Nov 7 17:15 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1987 Nov 7 17:15 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5604 Nov 7 17:15 /etc/kubernetes/scheduler.conf
I1107 17:17:28.980971 265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1107 17:17:28.989995 265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1107 17:17:28.997436 265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1107 17:17:29.005331 265599 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1107 17:17:29.005401 265599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1107 17:17:29.016147 265599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1107 17:17:29.025419 265599 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1107 17:17:29.025483 265599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1107 17:17:29.034956 265599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:17:29.043620 265599 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I1107 17:17:29.043651 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:29.101436 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:29.752261 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:29.918922 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:29.991213 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:30.133407 265599 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:17:30.133501 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:30.646530 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:31.146257 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:31.646851 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:31.727916 265599 api_server.go:71] duration metric: took 1.594511389s to wait for apiserver process to appear ...
I1107 17:17:31.727946 265599 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:17:31.727959 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:34.924268 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1107 17:17:34.924304 265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1107 17:17:35.424809 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:35.429498 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1107 17:17:35.429540 265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1107 17:17:35.925106 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:35.931883 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1107 17:17:35.931924 265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1107 17:17:36.424461 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:36.430147 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
I1107 17:17:36.437609 265599 api_server.go:140] control plane version: v1.25.3
I1107 17:17:36.437636 265599 api_server.go:130] duration metric: took 4.709684273s to wait for apiserver health ...
I1107 17:17:36.437645 265599 cni.go:95] Creating CNI manager for ""
I1107 17:17:36.437652 265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 17:17:36.437659 265599 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:36.447744 265599 system_pods.go:59] 6 kube-system pods found
I1107 17:17:36.447788 265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1107 17:17:36.447801 265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1107 17:17:36.447812 265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1107 17:17:36.447823 265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1107 17:17:36.447833 265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1107 17:17:36.447851 265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:36.447860 265599 system_pods.go:74] duration metric: took 10.195758ms to wait for pod list to return data ...
I1107 17:17:36.447873 265599 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:36.452085 265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:36.452127 265599 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:36.452142 265599 node_conditions.go:105] duration metric: took 4.263555ms to run NodePressure ...
I1107 17:17:36.452169 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:36.655569 265599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1107 17:17:36.659806 265599 kubeadm.go:778] kubelet initialised
I1107 17:17:36.659830 265599 kubeadm.go:779] duration metric: took 4.236781ms waiting for restarted kubelet to initialise ...
I1107 17:17:36.659837 265599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:36.664724 265599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:38.678405 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:40.678711 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:42.751499 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:45.178920 265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:45.178953 265599 pod_ready.go:81] duration metric: took 8.514203128s waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:45.178969 265599 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:47.190344 265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:47.190385 265599 pod_ready.go:81] duration metric: took 2.011408194s waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:47.190401 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.703190 265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.703227 265599 pod_ready.go:81] duration metric: took 1.512816405s waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.703241 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.708302 265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.708326 265599 pod_ready.go:81] duration metric: took 5.077395ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.708335 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.713353 265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.713373 265599 pod_ready.go:81] duration metric: took 5.032187ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.713382 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.718276 265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.718298 265599 pod_ready.go:81] duration metric: took 4.909784ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.718308 265599 pod_ready.go:38] duration metric: took 12.058462568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:48.718326 265599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1107 17:17:48.725688 265599 ops.go:34] apiserver oom_adj: -16
I1107 17:17:48.725713 265599 kubeadm.go:631] restartCluster took 23.70983267s
I1107 17:17:48.725723 265599 kubeadm.go:398] StartCluster complete in 23.739715552s
I1107 17:17:48.725742 265599 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:48.725827 265599 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15310-3679/kubeconfig
I1107 17:17:48.727240 265599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:48.728367 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:17:48.731431 265599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-171530" rescaled to 1
I1107 17:17:48.731509 265599 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1107 17:17:48.735381 265599 out.go:177] * Verifying Kubernetes components...
I1107 17:17:48.731563 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1107 17:17:48.731586 265599 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I1107 17:17:48.731727 265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:48.737019 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:48.737075 265599 addons.go:65] Setting default-storageclass=true in profile "pause-171530"
I1107 17:17:48.737103 265599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-171530"
I1107 17:17:48.737073 265599 addons.go:65] Setting storage-provisioner=true in profile "pause-171530"
I1107 17:17:48.737183 265599 addons.go:227] Setting addon storage-provisioner=true in "pause-171530"
W1107 17:17:48.737191 265599 addons.go:236] addon storage-provisioner should already be in state true
I1107 17:17:48.737247 265599 host.go:66] Checking if "pause-171530" exists ...
I1107 17:17:48.737345 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.737690 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.748838 265599 node_ready.go:35] waiting up to 6m0s for node "pause-171530" to be "Ready" ...
I1107 17:17:48.755501 265599 node_ready.go:49] node "pause-171530" has status "Ready":"True"
I1107 17:17:48.755530 265599 node_ready.go:38] duration metric: took 6.650143ms waiting for node "pause-171530" to be "Ready" ...
I1107 17:17:48.755544 265599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:48.774070 265599 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:17:48.776053 265599 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1107 17:17:48.776086 265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1107 17:17:48.776141 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:48.780418 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:17:48.783994 265599 addons.go:227] Setting addon default-storageclass=true in "pause-171530"
W1107 17:17:48.784033 265599 addons.go:236] addon default-storageclass should already be in state true
I1107 17:17:48.784066 265599 host.go:66] Checking if "pause-171530" exists ...
I1107 17:17:48.784533 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.791755 265599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.827118 265599 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I1107 17:17:48.827146 265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1107 17:17:48.827202 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:48.832614 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:48.844192 265599 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1107 17:17:48.858350 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:48.935269 265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1107 17:17:48.958923 265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1107 17:17:49.187938 265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.187970 265599 pod_ready.go:81] duration metric: took 396.174585ms waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.187985 265599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.588753 265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.588785 265599 pod_ready.go:81] duration metric: took 400.791096ms waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.588799 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.758403 265599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I1107 17:17:49.760036 265599 addons.go:488] enableAddons completed in 1.028452371s
I1107 17:17:49.988064 265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.988085 265599 pod_ready.go:81] duration metric: took 399.27917ms waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.988096 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.387943 265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:50.387964 265599 pod_ready.go:81] duration metric: took 399.861996ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.387975 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.787240 265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:50.787266 265599 pod_ready.go:81] duration metric: took 399.283504ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.787279 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:51.187853 265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:51.187885 265599 pod_ready.go:81] duration metric: took 400.597643ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:51.187896 265599 pod_ready.go:38] duration metric: took 2.432339677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:51.187921 265599 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:17:51.187970 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:51.198604 265599 api_server.go:71] duration metric: took 2.467050632s to wait for apiserver process to appear ...
I1107 17:17:51.198640 265599 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:17:51.198650 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:51.203228 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
I1107 17:17:51.204215 265599 api_server.go:140] control plane version: v1.25.3
I1107 17:17:51.204244 265599 api_server.go:130] duration metric: took 5.597242ms to wait for apiserver health ...
I1107 17:17:51.204255 265599 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:51.389884 265599 system_pods.go:59] 7 kube-system pods found
I1107 17:17:51.389918 265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
I1107 17:17:51.389923 265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
I1107 17:17:51.389927 265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
I1107 17:17:51.389932 265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
I1107 17:17:51.389936 265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
I1107 17:17:51.389940 265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:51.389944 265599 system_pods.go:61] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
I1107 17:17:51.389949 265599 system_pods.go:74] duration metric: took 185.688763ms to wait for pod list to return data ...
I1107 17:17:51.389958 265599 default_sa.go:34] waiting for default service account to be created ...
I1107 17:17:51.587856 265599 default_sa.go:45] found service account: "default"
I1107 17:17:51.587885 265599 default_sa.go:55] duration metric: took 197.921282ms for default service account to be created ...
I1107 17:17:51.587896 265599 system_pods.go:116] waiting for k8s-apps to be running ...
I1107 17:17:51.791610 265599 system_pods.go:86] 7 kube-system pods found
I1107 17:17:51.791656 265599 system_pods.go:89] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
I1107 17:17:51.791666 265599 system_pods.go:89] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
I1107 17:17:51.791683 265599 system_pods.go:89] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
I1107 17:17:51.791692 265599 system_pods.go:89] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
I1107 17:17:51.791699 265599 system_pods.go:89] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
I1107 17:17:51.791707 265599 system_pods.go:89] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:51.791717 265599 system_pods.go:89] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
I1107 17:17:51.791725 265599 system_pods.go:126] duration metric: took 203.823982ms to wait for k8s-apps to be running ...
I1107 17:17:51.791734 265599 system_svc.go:44] waiting for kubelet service to be running ....
I1107 17:17:51.791785 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:51.802112 265599 system_svc.go:56] duration metric: took 10.369415ms WaitForService to wait for kubelet.
I1107 17:17:51.802147 265599 kubeadm.go:573] duration metric: took 3.070599627s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1107 17:17:51.802170 265599 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:51.987329 265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:51.987365 265599 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:51.987379 265599 node_conditions.go:105] duration metric: took 185.202183ms to run NodePressure ...
I1107 17:17:51.987392 265599 start.go:217] waiting for startup goroutines ...
I1107 17:17:51.987763 265599 ssh_runner.go:195] Run: rm -f paused
I1107 17:17:52.043023 265599 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
I1107 17:17:52.045707 265599 out.go:177] * Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default
** /stderr **
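Editor's note: the stderr above shows why the assertion at pause_test.go:100 failed. After the restart, pgrep could not find the apiserver process, so minikube decided "needs reconfigure: apiserver error: timed out waiting for the condition" and re-ran the kubeadm init phases instead of printing "The running cluster does not require reconfiguration". A minimal, hypothetical Go sketch of that check (not the actual pause_test.go code), assuming only the same binary, profile, and flags used in the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same second-start invocation as in the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-171530",
		"--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput() // stdout and stderr together, as captured above
	if err != nil {
		fmt.Println("second start failed:", err)
	}
	// The marker the test expects; its absence is the failure reported above.
	if !strings.Contains(string(out), "The running cluster does not require reconfiguration") {
		fmt.Println("second start reconfigured the cluster")
	}
}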
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-171530
helpers_test.go:235: (dbg) docker inspect pause-171530:
-- stdout --
[
{
"Id": "e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550",
"Created": "2022-11-07T17:15:38.935447727Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 241803,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-11-07T17:15:39.387509554Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hostname",
"HostsPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hosts",
"LogPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550-json.log",
"Name": "/pause-171530",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"pause-171530:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-171530",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886-init/diff:/var/lib/docker/overlay2/2fd1fc00a589bf61b81b15f5596b1c421509b0ed94a0073de8df35851e0104fd/diff:/var/lib/docker/overlay2/ca94f1e5c7c58ab040213044ce029a51c1ea19ec2ae58d30e36b7c461dac5b75/diff:/var/lib/docker/overlay2/e42a9a60bb0ccca9f6ebc3bec24f638bafba48d604bd99af2d43cee1225c9466/diff:/var/lib/docker/overlay2/3474eef000daf16045ddcd082155e02d3adc432e026d93a79f6650da6b7bbe2c/diff:/var/lib/docker/overlay2/2c37502622a619527bab9f0e94b3c9e8ea823ff6ffdc84760dfeca0a7a1d2ba9/diff:/var/lib/docker/overlay2/c89ceddb787dc6015274fbee4e47c019bcb7637c523d5d053aafccc75f2d8c5b/diff:/var/lib/docker/overlay2/d13aa31ebe50e77225149ff2f5361d34b4b4dcbeb3b0bc0a15e35f3d4a8b7756/diff:/var/lib/docker/overlay2/c95f6f4ff58fc27002c40206891dabcbf4ed1b39c8f3584432f15b72a15920c1/diff:/var/lib/docker/overlay2/609367ca657fad1a480fd0d0075ab9d34c5556928b3f753bf75b7937a8b74ee8/diff:/var/lib/docker/overlay2/02a742
81aea9f2e787ac6f6c4ac9f7d01ae11e33439e4787dff010ca49918d6b/diff:/var/lib/docker/overlay2/97be1349403116decda81fc5f089a2db445d4c5a72b26e4fa1d2d69bc8f5b867/diff:/var/lib/docker/overlay2/0a0a5163f70151b385895e742fd238ec8e8e4f76def9c619677619db2a6d5b08/diff:/var/lib/docker/overlay2/5659ee0023498bf40cbbec8f9a2f0fddfc95419655c96d6605a451a2c46c6036/diff:/var/lib/docker/overlay2/490c47e44446d2723d18ba6ae67ce415128dbc5fd055c8b0c3af734b0a072691/diff:/var/lib/docker/overlay2/303dd4de2e78ffebe2a8b0327ff89f434f0d94efec1239397b26f584669c6688/diff:/var/lib/docker/overlay2/57cd5e60d0e6efc4eba5b1d3312be411722b2dbe779b38d7e29451cb53536ed6/diff:/var/lib/docker/overlay2/ebe05a325862fb9343e31e938f8b0cbebb9eac74b601c1cbd7c51d82932d20b4/diff:/var/lib/docker/overlay2/8536312e6228bdf272e430339824f16762dc9bb32d3fbcd5a2704ed1cbd37e64/diff:/var/lib/docker/overlay2/2598be8b2bb739fc75e87aee71f5af665456fffb16f599676335c74f15ae6391/diff:/var/lib/docker/overlay2/4d2d35e9d340ea3932b4095e279f70853bcd0793bb323921891c0c769627f2c5/diff:/var/lib/d
ocker/overlay2/4d826174051f4f89d8c7f9e2a1c0deeedf4fe1375b7e4805b1507830dfcb85eb/diff:/var/lib/docker/overlay2/04619ad2580acc4047033104b728374c0bcab41b326af981fd92107ded6f8715/diff:/var/lib/docker/overlay2/653c7b7d9b3ff747507ce6d4c8750195142e3c1e5dd8776d1f5ad68da192b0c3/diff:/var/lib/docker/overlay2/7feba1b41892a093a69f3006a5955540f607a8c16986fd594da627470dc20b50/diff:/var/lib/docker/overlay2/edfa060eb3735b8c7368bfa84da65c47f0381d016fcb1f23338cbe984ffb4309/diff:/var/lib/docker/overlay2/7bc7096889faa87a4f3542932b25941d0cb3ebdca2eb7a8323c0b437c946ca84/diff:/var/lib/docker/overlay2/6d9c19e156f90bc4ce093d160661251be6f95a51a9e0712f2a79c6a08cd996cd/diff:/var/lib/docker/overlay2/f5ba9cd7997e8cdfc6fb27c76c069767b07cc8201e7e0ef7c1a3ffa443525fb1/diff:/var/lib/docker/overlay2/43277eab35f847188e2fbacd196549314d6463948690b6eb7218cfe6ecc19b17/diff:/var/lib/docker/overlay2/ef090d552b4022f86d7bdf79bbc298e347a3e535c804f65b2d33683e0864901d/diff:/var/lib/docker/overlay2/8ef9f5644e2d99ddd144a8c44988dff320901634fa10fdd2ceb63b44464
942d2/diff:/var/lib/docker/overlay2/8db604496435b1f4a13ceca647b7f365eccc2122c46c001b46d3343020dce882/diff:/var/lib/docker/overlay2/aa63ff25f14d23e22d30a5f6ffdca4dc610d3a56fda7fcf8128955229e8179ac/diff:/var/lib/docker/overlay2/d8e836f399115dec3f57c3bdae8cfe9459ca00fb4db1619f7c32a54c17f2696a/diff:/var/lib/docker/overlay2/e8706f9f543307c51f76840c008a49519273628b367c558c81472382319ee067/diff:/var/lib/docker/overlay2/410562df42124ab024d1aed6c452424839223794de2fac149e33e3a2aaad7db5/diff:/var/lib/docker/overlay2/24ba0b84d34cf83f31c6e6420465d970cd940052bc918b875c8320dfbeccb3fc/diff:/var/lib/docker/overlay2/cfd31a3b8ba33133312104bac0d05c9334975dd18cb3dfff6ba901668d8935cb/diff:/var/lib/docker/overlay2/2bfc0a7a2746e54d77a9a1838e077ca17b8bd024966ed7fc7f4cfceffc1e41c9/diff:/var/lib/docker/overlay2/67ae264c7fe2b9c7f659d1bbdccdc178c34230e3b6aa815b7f3ff24d50f1ca5a/diff:/var/lib/docker/overlay2/2f921d0a0caaca67918401f3f9b193c0e89b931f174e447a79ba82b2a5743c6e/diff:/var/lib/docker/overlay2/8f6f97c7885b0f2745adf21261ead041f0b7ce
88d0ab325cfafd1cf3b9aa07f3/diff",
"MergedDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/merged",
"UpperDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/diff",
"WorkDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "pause-171530",
"Source": "/var/lib/docker/volumes/pause-171530/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "pause-171530",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-171530",
"name.minikube.sigs.k8s.io": "pause-171530",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a9adb1a46308a44769722d4564542b00b60699767153f3cfdcf9adf8a13796ed",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49369"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49368"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49365"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49367"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49366"
}
]
},
"SandboxKey": "/var/run/docker/netns/a9adb1a46308",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-171530": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": [
"e3da15937387",
"pause-171530"
],
"NetworkID": "39ab6118a516dd29e38bb2d528840c29808f0aaff829c163fb133591392f975d",
"EndpointID": "f05b8ecc16b4a46e2d24102363dbe97c03cc31d021c5d068a263b87ac53329f9",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
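Editor's note: the inspect output above also accounts for the ssh endpoint used earlier in the log (sshutil.go: 127.0.0.1:49369); the container publishes 22/tcp on that host port. A small, hypothetical helper (assuming only the docker CLI and the same Go template the cli_runner lines above pass to docker container inspect) to read that mapping back:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same lookup minikube ran: the first host binding for the container's 22/tcp port.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"pause-171530").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Println("ssh is forwarded to 127.0.0.1:" + strings.TrimSpace(string(out)))
}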
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-171530 -n pause-171530
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-171530 logs -n 25
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-171530 logs -n 25: (1.37022974s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | cert-options-171318 ssh | cert-options-171318 | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-171318 -- sudo | cert-options-171318 | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-171318 | cert-options-171318 | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
| ssh | docker-flags-171335 ssh | docker-flags-171335 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-171335 ssh | docker-flags-171335 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-171335 | docker-flags-171335 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| start | -p kubernetes-upgrade-171418 | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p missing-upgrade-171351 | missing-upgrade-171351 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p stopped-upgrade-171343 | stopped-upgrade-171343 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| stop | -p kubernetes-upgrade-171418 | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
| delete | -p stopped-upgrade-171343 | stopped-upgrade-171343 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
| start | -p kubernetes-upgrade-171418 | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p missing-upgrade-171351 | missing-upgrade-171351 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
| start | -p pause-171530 --memory=2048 | pause-171530 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:17 UTC |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p cert-expiration-171219 | cert-expiration-171219 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p running-upgrade-171507 | running-upgrade-171507 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p running-upgrade-171507 | running-upgrade-171507 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
| start | -p auto-171300 --memory=2048 | auto-171300 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p cert-expiration-171219 | cert-expiration-171219 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
| start | -p kindnet-171300 | kindnet-171300 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=kindnet --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p pause-171530 | pause-171530 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p kindnet-171300 pgrep -a | kindnet-171300 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| | kubelet | | | | | |
| delete | -p kindnet-171300 | kindnet-171300 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| start | -p cilium-171301 --memory=2048 | cilium-171301 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=cilium --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p auto-171300 pgrep -a | auto-171300 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| | kubelet | | | | | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/07 17:17:39
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1107 17:17:39.909782 273963 out.go:296] Setting OutFile to fd 1 ...
I1107 17:17:39.909910 273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:17:39.909920 273963 out.go:309] Setting ErrFile to fd 2...
I1107 17:17:39.909925 273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:17:39.910036 273963 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
I1107 17:17:39.910611 273963 out.go:303] Setting JSON to false
I1107 17:17:39.912756 273963 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3611,"bootTime":1667837849,"procs":1171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1107 17:17:39.912825 273963 start.go:126] virtualization: kvm guest
I1107 17:17:39.916343 273963 out.go:177] * [cilium-171301] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I1107 17:17:39.918167 273963 out.go:177] - MINIKUBE_LOCATION=15310
I1107 17:17:39.918122 273963 notify.go:220] Checking for updates...
I1107 17:17:39.919930 273963 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 17:17:39.921709 273963 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
I1107 17:17:39.923329 273963 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
I1107 17:17:39.924851 273963 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1107 17:17:39.927024 273963 config.go:180] Loaded profile config "auto-171300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:39.927142 273963 config.go:180] Loaded profile config "kubernetes-upgrade-171418": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:39.927235 273963 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:39.927287 273963 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 17:17:39.959963 273963 docker.go:137] docker version: linux-20.10.21
I1107 17:17:39.960043 273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:17:40.066046 273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:39.981648038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:17:40.066199 273963 docker.go:254] overlay module found
I1107 17:17:40.069246 273963 out.go:177] * Using the docker driver based on user configuration
I1107 17:17:40.070821 273963 start.go:282] selected driver: docker
I1107 17:17:40.070848 273963 start.go:808] validating driver "docker" against <nil>
I1107 17:17:40.070871 273963 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 17:17:40.072076 273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:17:40.184024 273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:40.095572549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:17:40.184162 273963 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1107 17:17:40.184327 273963 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1107 17:17:40.186905 273963 out.go:177] * Using Docker driver with root privileges
I1107 17:17:40.188888 273963 cni.go:95] Creating CNI manager for "cilium"
I1107 17:17:40.188919 273963 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
I1107 17:17:40.188929 273963 start_flags.go:317] config:
{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:
cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:40.191042 273963 out.go:177] * Starting control plane node cilium-171301 in cluster cilium-171301
I1107 17:17:40.192756 273963 cache.go:120] Beginning downloading kic base image for docker with docker
I1107 17:17:40.194622 273963 out.go:177] * Pulling base image ...
I1107 17:17:40.196366 273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:40.196424 273963 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1107 17:17:40.196439 273963 cache.go:57] Caching tarball of preloaded images
I1107 17:17:40.196478 273963 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1107 17:17:40.196755 273963 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1107 17:17:40.196770 273963 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1107 17:17:40.196994 273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
I1107 17:17:40.197037 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json: {Name:mke8d5318de654621f86e157b3b792411142e89b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:40.226030 273963 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1107 17:17:40.226064 273963 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1107 17:17:40.226085 273963 cache.go:208] Successfully downloaded all kic artifacts
I1107 17:17:40.226119 273963 start.go:364] acquiring machines lock for cilium-171301: {Name:mk73a4f694f74dc8530831944bb92040f98c814b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1107 17:17:40.226272 273963 start.go:368] acquired machines lock for "cilium-171301" in 128.513µs
I1107 17:17:40.226338 273963 start.go:93] Provisioning new machine with config: &{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1107 17:17:40.226851 273963 start.go:125] createHost starting for "" (driver="docker")
I1107 17:17:35.925106 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:35.931883 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1107 17:17:35.931924 265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1107 17:17:36.424461 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:36.430147 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
I1107 17:17:36.437609 265599 api_server.go:140] control plane version: v1.25.3
I1107 17:17:36.437636 265599 api_server.go:130] duration metric: took 4.709684273s to wait for apiserver health ...
I1107 17:17:36.437645 265599 cni.go:95] Creating CNI manager for ""
I1107 17:17:36.437652 265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 17:17:36.437659 265599 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:36.447744 265599 system_pods.go:59] 6 kube-system pods found
I1107 17:17:36.447788 265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1107 17:17:36.447801 265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1107 17:17:36.447812 265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1107 17:17:36.447823 265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1107 17:17:36.447833 265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1107 17:17:36.447851 265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:36.447860 265599 system_pods.go:74] duration metric: took 10.195758ms to wait for pod list to return data ...
I1107 17:17:36.447873 265599 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:36.452085 265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:36.452127 265599 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:36.452142 265599 node_conditions.go:105] duration metric: took 4.263555ms to run NodePressure ...
I1107 17:17:36.452169 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:36.655569 265599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1107 17:17:36.659806 265599 kubeadm.go:778] kubelet initialised
I1107 17:17:36.659830 265599 kubeadm.go:779] duration metric: took 4.236781ms waiting for restarted kubelet to initialise ...
I1107 17:17:36.659837 265599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:36.664724 265599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:38.678405 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:39.764430 254808 pod_ready.go:92] pod "coredns-565d847f94-zscpb" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.764470 254808 pod_ready.go:81] duration metric: took 37.51089729s waiting for pod "coredns-565d847f94-zscpb" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.764489 254808 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.769704 254808 pod_ready.go:92] pod "etcd-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.769729 254808 pod_ready.go:81] duration metric: took 5.228844ms waiting for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.769741 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.774830 254808 pod_ready.go:92] pod "kube-apiserver-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.774850 254808 pod_ready.go:81] duration metric: took 5.101563ms waiting for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.774863 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.779742 254808 pod_ready.go:92] pod "kube-controller-manager-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.779767 254808 pod_ready.go:81] duration metric: took 4.895957ms waiting for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.779780 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.787718 254808 pod_ready.go:92] pod "kube-proxy-5hjzb" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.787745 254808 pod_ready.go:81] duration metric: took 7.956771ms waiting for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.787759 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:40.161780 254808 pod_ready.go:92] pod "kube-scheduler-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:40.161804 254808 pod_ready.go:81] duration metric: took 374.038459ms waiting for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:40.161812 254808 pod_ready.go:38] duration metric: took 39.930959656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:40.161836 254808 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:17:40.161880 254808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:40.174326 254808 api_server.go:71] duration metric: took 40.098096653s to wait for apiserver process to appear ...
I1107 17:17:40.174356 254808 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:17:40.174385 254808 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I1107 17:17:40.180459 254808 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
ok
I1107 17:17:40.181698 254808 api_server.go:140] control plane version: v1.25.3
I1107 17:17:40.181729 254808 api_server.go:130] duration metric: took 7.366556ms to wait for apiserver health ...
I1107 17:17:40.181739 254808 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:40.365251 254808 system_pods.go:59] 7 kube-system pods found
I1107 17:17:40.365291 254808 system_pods.go:61] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
I1107 17:17:40.365298 254808 system_pods.go:61] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
I1107 17:17:40.365305 254808 system_pods.go:61] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
I1107 17:17:40.365313 254808 system_pods.go:61] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
I1107 17:17:40.365320 254808 system_pods.go:61] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
I1107 17:17:40.365326 254808 system_pods.go:61] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
I1107 17:17:40.365341 254808 system_pods.go:61] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
I1107 17:17:40.365353 254808 system_pods.go:74] duration metric: took 183.607113ms to wait for pod list to return data ...
I1107 17:17:40.365368 254808 default_sa.go:34] waiting for default service account to be created ...
I1107 17:17:40.561571 254808 default_sa.go:45] found service account: "default"
I1107 17:17:40.561596 254808 default_sa.go:55] duration metric: took 196.218934ms for default service account to be created ...
I1107 17:17:40.561604 254808 system_pods.go:116] waiting for k8s-apps to be running ...
I1107 17:17:40.765129 254808 system_pods.go:86] 7 kube-system pods found
I1107 17:17:40.765166 254808 system_pods.go:89] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
I1107 17:17:40.765200 254808 system_pods.go:89] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
I1107 17:17:40.765210 254808 system_pods.go:89] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
I1107 17:17:40.765218 254808 system_pods.go:89] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
I1107 17:17:40.765225 254808 system_pods.go:89] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
I1107 17:17:40.765231 254808 system_pods.go:89] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
I1107 17:17:40.765237 254808 system_pods.go:89] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
I1107 17:17:40.765245 254808 system_pods.go:126] duration metric: took 203.635578ms to wait for k8s-apps to be running ...
I1107 17:17:40.765255 254808 system_svc.go:44] waiting for kubelet service to be running ....
I1107 17:17:40.765298 254808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:40.776269 254808 system_svc.go:56] duration metric: took 11.004445ms WaitForService to wait for kubelet.
I1107 17:17:40.776304 254808 kubeadm.go:573] duration metric: took 40.700080633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1107 17:17:40.776325 254808 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:40.962904 254808 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:40.962940 254808 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:40.962955 254808 node_conditions.go:105] duration metric: took 186.624576ms to run NodePressure ...
I1107 17:17:40.962972 254808 start.go:217] waiting for startup goroutines ...
I1107 17:17:40.963411 254808 ssh_runner.go:195] Run: rm -f paused
I1107 17:17:41.016064 254808 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
I1107 17:17:41.019135 254808 out.go:177] * Done! kubectl is now configured to use "auto-171300" cluster and "default" namespace by default
I1107 17:17:38.938491 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:38.966502 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:38.966589 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:38.992316 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:38.992406 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:39.018933 233006 logs.go:274] 0 containers: []
W1107 17:17:39.018962 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:39.019012 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:39.046418 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:39.046497 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:39.072173 233006 logs.go:274] 0 containers: []
W1107 17:17:39.072208 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:39.072257 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:39.098237 233006 logs.go:274] 0 containers: []
W1107 17:17:39.098266 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:39.098309 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:39.124960 233006 logs.go:274] 0 containers: []
W1107 17:17:39.124989 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:39.125038 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:39.153502 233006 logs.go:274] 3 containers: [8891a1b14e04 1c2c98a4c31a 371287b3c0c6]
I1107 17:17:39.153554 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:39.153570 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:39.193713 233006 logs.go:123] Gathering logs for kube-controller-manager [1c2c98a4c31a] ...
I1107 17:17:39.193770 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2c98a4c31a"
I1107 17:17:39.222940 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:39.222968 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:39.264980 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:39.265019 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:39.306266 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:39.306303 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:39.375563 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:39.375608 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:39.446970 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:39.446997 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:39.447010 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:39.478856 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:39.478893 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:39.551509 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:39.551552 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:39.588201 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:39.588235 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:39.622485 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:39.622531 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:39.711503 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:39.711531 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:39.746571 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:39.746605 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:42.339399 233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1107 17:17:42.339827 233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I1107 17:17:42.439058 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:42.465860 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:42.465945 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:42.503349 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:42.503419 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:42.529180 233006 logs.go:274] 0 containers: []
W1107 17:17:42.529209 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:42.529272 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:42.556348 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:42.556424 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:42.585423 233006 logs.go:274] 0 containers: []
W1107 17:17:42.585457 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:42.585514 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:42.612694 233006 logs.go:274] 0 containers: []
W1107 17:17:42.612730 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:42.612806 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:42.638513 233006 logs.go:274] 0 containers: []
W1107 17:17:42.638534 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:42.638584 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:42.666063 233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
I1107 17:17:42.666121 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:42.666139 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:42.683133 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:42.683163 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:42.718461 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:42.718496 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:42.752314 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:42.752340 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:42.774285 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:42.774322 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:42.808596 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:42.808627 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:42.886659 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:42.886698 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:42.960618 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:42.960656 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:42.960670 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:43.002805 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:43.002858 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:43.082429 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:43.082467 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:43.115843 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:43.115911 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:43.190735 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:43.190775 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:40.229568 273963 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I1107 17:17:40.229875 273963 start.go:159] libmachine.API.Create for "cilium-171301" (driver="docker")
I1107 17:17:40.229916 273963 client.go:168] LocalClient.Create starting
I1107 17:17:40.230045 273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem
I1107 17:17:40.230090 273963 main.go:134] libmachine: Decoding PEM data...
I1107 17:17:40.230115 273963 main.go:134] libmachine: Parsing certificate...
I1107 17:17:40.230183 273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem
I1107 17:17:40.230204 273963 main.go:134] libmachine: Decoding PEM data...
I1107 17:17:40.230217 273963 main.go:134] libmachine: Parsing certificate...
I1107 17:17:40.230581 273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1107 17:17:40.255766 273963 cli_runner.go:211] docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1107 17:17:40.255850 273963 network_create.go:272] running [docker network inspect cilium-171301] to gather additional debugging logs...
I1107 17:17:40.255875 273963 cli_runner.go:164] Run: docker network inspect cilium-171301
W1107 17:17:40.279408 273963 cli_runner.go:211] docker network inspect cilium-171301 returned with exit code 1
I1107 17:17:40.279440 273963 network_create.go:275] error running [docker network inspect cilium-171301]: docker network inspect cilium-171301: exit status 1
stdout:
[]
stderr:
Error: No such network: cilium-171301
I1107 17:17:40.279451 273963 network_create.go:277] output of [docker network inspect cilium-171301]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: cilium-171301
** /stderr **
I1107 17:17:40.279494 273963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:17:40.309079 273963 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-aa8bc6b4377d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:4a:a0:7f}}
I1107 17:17:40.309777 273963 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-46185e74412a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:c3:83:d6}}
I1107 17:17:40.310466 273963 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0004bc5f8] misses:0}
I1107 17:17:40.310501 273963 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1107 17:17:40.310513 273963 network_create.go:115] attempt to create docker network cilium-171301 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1107 17:17:40.310578 273963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-171301 cilium-171301
I1107 17:17:40.390589 273963 network_create.go:99] docker network cilium-171301 192.168.67.0/24 created
I1107 17:17:40.390635 273963 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-171301" container
I1107 17:17:40.390704 273963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1107 17:17:40.426276 273963 cli_runner.go:164] Run: docker volume create cilium-171301 --label name.minikube.sigs.k8s.io=cilium-171301 --label created_by.minikube.sigs.k8s.io=true
I1107 17:17:40.452601 273963 oci.go:103] Successfully created a docker volume cilium-171301
I1107 17:17:40.452735 273963 cli_runner.go:164] Run: docker run --rm --name cilium-171301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --entrypoint /usr/bin/test -v cilium-171301:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1107 17:17:41.261517 273963 oci.go:107] Successfully prepared a docker volume cilium-171301
I1107 17:17:41.261565 273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:41.261584 273963 kic.go:179] Starting extracting preloaded images to volume ...
I1107 17:17:41.261639 273963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1107 17:17:44.552998 273963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.291298492s)
I1107 17:17:44.553029 273963 kic.go:188] duration metric: took 3.291442 seconds to extract preloaded images to volume
W1107 17:17:44.553206 273963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1107 17:17:44.553333 273963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1107 17:17:44.659014 273963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-171301 --name cilium-171301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-171301 --network cilium-171301 --ip 192.168.67.2 --volume cilium-171301:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1107 17:17:40.678711 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:42.751499 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:45.178920 265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:45.178953 265599 pod_ready.go:81] duration metric: took 8.514203128s waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:45.178969 265599 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:47.190344 265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:47.190385 265599 pod_ready.go:81] duration metric: took 2.011408194s waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:47.190401 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.703190 265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.703227 265599 pod_ready.go:81] duration metric: took 1.512816405s waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.703241 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.708302 265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.708326 265599 pod_ready.go:81] duration metric: took 5.077395ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.708335 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.713353 265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.713373 265599 pod_ready.go:81] duration metric: took 5.032187ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.713382 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.718276 265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.718298 265599 pod_ready.go:81] duration metric: took 4.909784ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.718308 265599 pod_ready.go:38] duration metric: took 12.058462568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:48.718326 265599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1107 17:17:48.725688 265599 ops.go:34] apiserver oom_adj: -16
I1107 17:17:48.725713 265599 kubeadm.go:631] restartCluster took 23.70983267s
I1107 17:17:48.725723 265599 kubeadm.go:398] StartCluster complete in 23.739715552s
I1107 17:17:48.725742 265599 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:48.725827 265599 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15310-3679/kubeconfig
I1107 17:17:48.727240 265599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:48.728367 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:17:48.731431 265599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-171530" rescaled to 1
I1107 17:17:48.731509 265599 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1107 17:17:48.735381 265599 out.go:177] * Verifying Kubernetes components...
I1107 17:17:45.728936 233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1107 17:17:45.729307 233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I1107 17:17:45.938905 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:45.968231 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:45.968310 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:45.995241 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:45.995316 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:46.024313 233006 logs.go:274] 0 containers: []
W1107 17:17:46.024343 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:46.024394 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:46.054216 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:46.054293 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:46.088627 233006 logs.go:274] 0 containers: []
W1107 17:17:46.088662 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:46.088710 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:46.116330 233006 logs.go:274] 0 containers: []
W1107 17:17:46.116365 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:46.116420 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:46.150637 233006 logs.go:274] 0 containers: []
W1107 17:17:46.150668 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:46.150771 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:46.182148 233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
I1107 17:17:46.182207 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:46.182221 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:46.204275 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:46.204315 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:46.244475 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:46.244515 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:46.337500 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:46.337547 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:46.384737 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:46.384774 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:46.405735 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:46.405772 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:46.443740 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:46.443780 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:46.515276 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:46.515311 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:46.550260 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:46.550314 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:46.632884 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:46.632921 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:46.667751 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:46.667787 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:46.701085 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:46.701121 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:46.780102 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:48.731563 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1107 17:17:48.731586 265599 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I1107 17:17:48.731727 265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:48.737019 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:48.737075 265599 addons.go:65] Setting default-storageclass=true in profile "pause-171530"
I1107 17:17:48.737103 265599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-171530"
I1107 17:17:48.737073 265599 addons.go:65] Setting storage-provisioner=true in profile "pause-171530"
I1107 17:17:48.737183 265599 addons.go:227] Setting addon storage-provisioner=true in "pause-171530"
W1107 17:17:48.737191 265599 addons.go:236] addon storage-provisioner should already be in state true
I1107 17:17:48.737247 265599 host.go:66] Checking if "pause-171530" exists ...
I1107 17:17:48.737345 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.737690 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.748838 265599 node_ready.go:35] waiting up to 6m0s for node "pause-171530" to be "Ready" ...
I1107 17:17:48.755501 265599 node_ready.go:49] node "pause-171530" has status "Ready":"True"
I1107 17:17:48.755530 265599 node_ready.go:38] duration metric: took 6.650143ms waiting for node "pause-171530" to be "Ready" ...
I1107 17:17:48.755544 265599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:48.774070 265599 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:17:45.119361 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Running}}
I1107 17:17:45.160545 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
I1107 17:17:45.191402 273963 cli_runner.go:164] Run: docker exec cilium-171301 stat /var/lib/dpkg/alternatives/iptables
I1107 17:17:45.267825 273963 oci.go:144] the created container "cilium-171301" has a running status.
I1107 17:17:45.267856 273963 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa...
I1107 17:17:45.381762 273963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1107 17:17:45.520399 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
I1107 17:17:45.581314 273963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1107 17:17:45.581340 273963 kic_runner.go:114] Args: [docker exec --privileged cilium-171301 chown docker:docker /home/docker/.ssh/authorized_keys]
I1107 17:17:45.671973 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
I1107 17:17:45.703596 273963 machine.go:88] provisioning docker machine ...
I1107 17:17:45.703639 273963 ubuntu.go:169] provisioning hostname "cilium-171301"
I1107 17:17:45.703689 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:45.732869 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:45.733123 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:45.733143 273963 main.go:134] libmachine: About to run SSH command:
sudo hostname cilium-171301 && echo "cilium-171301" | sudo tee /etc/hostname
I1107 17:17:45.878648 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-171301
I1107 17:17:45.878766 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:45.906394 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:45.906551 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:45.906570 273963 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scilium-171301' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-171301/g' /etc/hosts;
else
echo '127.0.1.1 cilium-171301' | sudo tee -a /etc/hosts;
fi
fi
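A quick way to confirm what the two SSH commands above actually changed on the node (purely a verification sketch, not part of the test run):
hostname                              # should print cilium-171301
grep cilium-171301 /etc/hosts         # should show the 127.0.1.1 entry written above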
I1107 17:17:46.027393 273963 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1107 17:17:46.027440 273963 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
I1107 17:17:46.027464 273963 ubuntu.go:177] setting up certificates
I1107 17:17:46.027474 273963 provision.go:83] configureAuth start
I1107 17:17:46.027538 273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
I1107 17:17:46.061281 273963 provision.go:138] copyHostCerts
I1107 17:17:46.061348 273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
I1107 17:17:46.061366 273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
I1107 17:17:46.061441 273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
I1107 17:17:46.061560 273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
I1107 17:17:46.061575 273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
I1107 17:17:46.061617 273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
I1107 17:17:46.061749 273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
I1107 17:17:46.061764 273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
I1107 17:17:46.061801 273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
I1107 17:17:46.061863 273963 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.cilium-171301 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-171301]
I1107 17:17:46.253924 273963 provision.go:172] copyRemoteCerts
I1107 17:17:46.253999 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1107 17:17:46.254047 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.296985 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:46.384442 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1107 17:17:46.404309 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1107 17:17:46.427506 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1107 17:17:46.449504 273963 provision.go:86] duration metric: configureAuth took 422.011748ms
I1107 17:17:46.449540 273963 ubuntu.go:193] setting minikube options for container-runtime
I1107 17:17:46.449738 273963 config.go:180] Loaded profile config "cilium-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:46.449813 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.481398 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:46.481541 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:46.481555 273963 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1107 17:17:46.599328 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1107 17:17:46.599354 273963 ubuntu.go:71] root file system type: overlay
I1107 17:17:46.599539 273963 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1107 17:17:46.599598 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.629056 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:46.629241 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:46.629343 273963 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1107 17:17:46.770161 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
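Before the conditional replace that follows, the rendered unit can be sanity-checked by hand; a sketch assuming shell access to the node:
sudo systemd-analyze verify /lib/systemd/system/docker.service.new   # catches unit syntax problems, e.g. a stray duplicate ExecStart=
sudo systemctl cat docker.service                                    # after the swap below, shows the unit systemd actually loaded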
I1107 17:17:46.770248 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.799041 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:46.799188 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:46.799207 273963 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1107 17:17:47.547232 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-07 17:17:46.766442749 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
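The command at 17:17:46.799 above is the usual "only touch the service when the rendered unit differs" idiom; the same pattern written out as a standalone sketch, with the paths taken from the log:
if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
fi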
I1107 17:17:47.547272 273963 machine.go:91] provisioned docker machine in 1.84364984s
I1107 17:17:47.547283 273963 client.go:171] LocalClient.Create took 7.317360133s
I1107 17:17:47.547304 273963 start.go:167] duration metric: libmachine.API.Create for "cilium-171301" took 7.317430541s
I1107 17:17:47.547312 273963 start.go:300] post-start starting for "cilium-171301" (driver="docker")
I1107 17:17:47.547320 273963 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1107 17:17:47.547382 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1107 17:17:47.547424 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.580680 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:47.670961 273963 ssh_runner.go:195] Run: cat /etc/os-release
I1107 17:17:47.674334 273963 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1107 17:17:47.674370 273963 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1107 17:17:47.674379 273963 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1107 17:17:47.674385 273963 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1107 17:17:47.674395 273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
I1107 17:17:47.674457 273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
I1107 17:17:47.674531 273963 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
I1107 17:17:47.674630 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1107 17:17:47.682576 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
I1107 17:17:47.702345 273963 start.go:303] post-start completed in 155.016776ms
I1107 17:17:47.702863 273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
I1107 17:17:47.729269 273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
I1107 17:17:47.729653 273963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1107 17:17:47.729754 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.754933 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:47.839677 273963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1107 17:17:47.843908 273963 start.go:128] duration metric: createHost completed in 7.617038008s
I1107 17:17:47.843931 273963 start.go:83] releasing machines lock for "cilium-171301", held for 7.617622807s
I1107 17:17:47.844011 273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
I1107 17:17:47.870280 273963 ssh_runner.go:195] Run: systemctl --version
I1107 17:17:47.870346 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.870364 273963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1107 17:17:47.870434 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.897797 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:47.898053 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:48.013979 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1107 17:17:48.022299 273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
I1107 17:17:48.037257 273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:48.110172 273963 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1107 17:17:48.198655 273963 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1107 17:17:48.210409 273963 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1107 17:17:48.210475 273963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1107 17:17:48.222331 273963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1107 17:17:48.238231 273963 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1107 17:17:48.324359 273963 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1107 17:17:48.401465 273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:48.479636 273963 ssh_runner.go:195] Run: sudo systemctl restart docker
I1107 17:17:48.709599 273963 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1107 17:17:48.829234 273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:48.915216 273963 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1107 17:17:48.926795 273963 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1107 17:17:48.926878 273963 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1107 17:17:48.930979 273963 start.go:472] Will wait 60s for crictl version
I1107 17:17:48.931044 273963 ssh_runner.go:195] Run: sudo crictl version
I1107 17:17:48.968172 273963 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I1107 17:17:48.968235 273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 17:17:49.004145 273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 17:17:48.776053 265599 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1107 17:17:48.776086 265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1107 17:17:48.776141 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:48.780418 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:17:48.783994 265599 addons.go:227] Setting addon default-storageclass=true in "pause-171530"
W1107 17:17:48.784033 265599 addons.go:236] addon default-storageclass should already be in state true
I1107 17:17:48.784066 265599 host.go:66] Checking if "pause-171530" exists ...
I1107 17:17:48.784533 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.791755 265599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.827118 265599 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I1107 17:17:48.827146 265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1107 17:17:48.827202 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:48.832614 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:48.844192 265599 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1107 17:17:48.858350 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:48.935269 265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1107 17:17:48.958923 265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1107 17:17:49.187938 265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.187970 265599 pod_ready.go:81] duration metric: took 396.174585ms waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.187985 265599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.588753 265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.588785 265599 pod_ready.go:81] duration metric: took 400.791096ms waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.588799 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.758403 265599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I1107 17:17:49.040144 273963 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I1107 17:17:49.040219 273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:17:49.069531 273963 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1107 17:17:49.072992 273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1107 17:17:49.083058 273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:49.083116 273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 17:17:49.107581 273963 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1107 17:17:49.107611 273963 docker.go:543] Images already preloaded, skipping extraction
I1107 17:17:49.107668 273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 17:17:49.133204 273963 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1107 17:17:49.133245 273963 cache_images.go:84] Images are preloaded, skipping loading
I1107 17:17:49.133295 273963 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1107 17:17:49.206522 273963 cni.go:95] Creating CNI manager for "cilium"
I1107 17:17:49.206553 273963 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1107 17:17:49.206574 273963 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-171301 NodeName:cilium-171301 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1107 17:17:49.206774 273963 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "cilium-171301"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
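Once that generated config has been copied to /var/tmp/minikube/kubeadm.yaml (it is scp'd a few lines below), kubeadm can be asked to preview the init without touching the node; a hedged sketch reusing the pinned binaries from the log:
sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run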
I1107 17:17:49.206866 273963 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-171301 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
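The [Service] block above is written as a drop-in (the 10-kubeadm.conf scp'd just below), with an empty ExecStart= line to clear the base unit's command before setting the new one; a sketch for inspecting the result on the node:
sudo systemctl cat kubelet       # base kubelet.service plus the 10-kubeadm.conf drop-in, merged as systemd sees it
sudo systemctl daemon-reload     # required before the new ExecStart takes effect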
I1107 17:17:49.206924 273963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I1107 17:17:49.215024 273963 binaries.go:44] Found k8s binaries, skipping transfer
I1107 17:17:49.215106 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1107 17:17:49.223091 273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I1107 17:17:49.237727 273963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1107 17:17:49.251298 273963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
I1107 17:17:49.265109 273963 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1107 17:17:49.268700 273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1107 17:17:49.278537 273963 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301 for IP: 192.168.67.2
I1107 17:17:49.278656 273963 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
I1107 17:17:49.278710 273963 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
I1107 17:17:49.278784 273963 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key
I1107 17:17:49.278798 273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt with IP's: []
I1107 17:17:49.377655 273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt ...
I1107 17:17:49.377689 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: {Name:mk85045205a0f3cc9db16d3ba4384eb58e4d4170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.377932 273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key ...
I1107 17:17:49.377950 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key: {Name:mk22ddbbc0c35976a622861a2537590ceb2c3529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.378071 273963 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e
I1107 17:17:49.378101 273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1107 17:17:49.717401 273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e ...
I1107 17:17:49.717449 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e: {Name:mk1d0b418ed1d3c777ce02b789369b0a0920bca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.717668 273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e ...
I1107 17:17:49.717686 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e: {Name:mkad3745d4acb3a4df279ae7d626aaef591fc7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.717800 273963 certs.go:320] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt
I1107 17:17:49.717875 273963 certs.go:324] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key
I1107 17:17:49.717938 273963 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key
I1107 17:17:49.717957 273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt with IP's: []
I1107 17:17:49.788111 273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt ...
I1107 17:17:49.788144 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt: {Name:mk4ef43b9fbc1a2c60e066e8c2245294f6e4a088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.788346 273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key ...
I1107 17:17:49.788363 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key: {Name:mk3536bb270258df328f9904013708493e9e5cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.788581 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
W1107 17:17:49.788630 273963 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
I1107 17:17:49.788648 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
I1107 17:17:49.788683 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
I1107 17:17:49.788717 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
I1107 17:17:49.788750 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
I1107 17:17:49.788805 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
I1107 17:17:49.789402 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1107 17:17:49.809402 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1107 17:17:49.828363 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1107 17:17:49.851556 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1107 17:17:49.875238 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1107 17:17:49.895507 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1107 17:17:49.917493 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1107 17:17:49.938898 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1107 17:17:49.958074 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1107 17:17:49.976967 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
I1107 17:17:49.997249 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
I1107 17:17:50.022620 273963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1107 17:17:50.037986 273963 ssh_runner.go:195] Run: openssl version
I1107 17:17:50.043912 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
I1107 17:17:50.052548 273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
I1107 17:17:50.056053 273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 7 16:50 /usr/share/ca-certificates/10129.pem
I1107 17:17:50.056137 273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
I1107 17:17:50.061307 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
I1107 17:17:50.069615 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
I1107 17:17:50.079805 273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
I1107 17:17:50.084296 273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 7 16:50 /usr/share/ca-certificates/101292.pem
I1107 17:17:50.084356 273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
I1107 17:17:50.090328 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
I1107 17:17:50.099164 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1107 17:17:50.110113 273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:50.114343 273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 7 16:46 /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:50.114408 273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:50.120637 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
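The link names such as b5213941.0 above are not arbitrary: they are the OpenSSL subject hash of the certificate, which is how lookups in /etc/ssl/certs work. A minimal sketch of the same derivation for the minikube CA:
h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"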
I1107 17:17:50.130809 273963 kubeadm.go:396] StartCluster: {Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:50.130955 273963 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1107 17:17:50.158917 273963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1107 17:17:50.166269 273963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:17:50.174871 273963 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 17:17:50.174936 273963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:17:50.184105 273963 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 17:17:50.184164 273963 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 17:17:50.239005 273963 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I1107 17:17:50.239098 273963 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 17:17:50.279571 273963 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1107 17:17:50.279660 273963 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1107 17:17:50.279716 273963 kubeadm.go:317] OS: Linux
I1107 17:17:50.279780 273963 kubeadm.go:317] CGROUPS_CPU: enabled
I1107 17:17:50.279825 273963 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1107 17:17:50.279866 273963 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1107 17:17:50.279907 273963 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1107 17:17:50.279948 273963 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1107 17:17:50.279989 273963 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1107 17:17:50.280029 273963 kubeadm.go:317] CGROUPS_PIDS: enabled
I1107 17:17:50.280070 273963 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1107 17:17:50.280109 273963 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1107 17:17:50.359738 273963 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1107 17:17:50.359870 273963 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1107 17:17:50.359983 273963 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1107 17:17:50.504499 273963 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1107 17:17:49.760036 265599 addons.go:488] enableAddons completed in 1.028452371s
I1107 17:17:49.988064 265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.988085 265599 pod_ready.go:81] duration metric: took 399.27917ms waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.988096 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.387943 265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:50.387964 265599 pod_ready.go:81] duration metric: took 399.861996ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.387975 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.787240 265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:50.787266 265599 pod_ready.go:81] duration metric: took 399.283504ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.787279 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:51.187853 265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:51.187885 265599 pod_ready.go:81] duration metric: took 400.597643ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:51.187896 265599 pod_ready.go:38] duration metric: took 2.432339677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
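The pod_ready loop above is roughly what kubectl wait does; as a sketch, assuming the pause-171530 context is the current one (the Done! line further down shows minikube configures kubectl that way):
kubectl -n kube-system wait pod --all --for=condition=Ready --timeout=6m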
I1107 17:17:51.187921 265599 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:17:51.187970 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:51.198604 265599 api_server.go:71] duration metric: took 2.467050632s to wait for apiserver process to appear ...
I1107 17:17:51.198640 265599 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:17:51.198650 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:51.203228 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
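The same health probe can be made from the host with curl, using the client certificate paths from the kapi client config logged at 17:17:48.780 above (a verification sketch, not something the test itself runs):
curl --cacert /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt \
  --cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt \
  --key /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key \
  https://192.168.85.2:8443/healthz
# expected response: ok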
I1107 17:17:51.204215 265599 api_server.go:140] control plane version: v1.25.3
I1107 17:17:51.204244 265599 api_server.go:130] duration metric: took 5.597242ms to wait for apiserver health ...
I1107 17:17:51.204255 265599 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:51.389884 265599 system_pods.go:59] 7 kube-system pods found
I1107 17:17:51.389918 265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
I1107 17:17:51.389923 265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
I1107 17:17:51.389927 265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
I1107 17:17:51.389932 265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
I1107 17:17:51.389936 265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
I1107 17:17:51.389940 265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:51.389944 265599 system_pods.go:61] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
I1107 17:17:51.389949 265599 system_pods.go:74] duration metric: took 185.688763ms to wait for pod list to return data ...
I1107 17:17:51.389958 265599 default_sa.go:34] waiting for default service account to be created ...
I1107 17:17:51.587856 265599 default_sa.go:45] found service account: "default"
I1107 17:17:51.587885 265599 default_sa.go:55] duration metric: took 197.921282ms for default service account to be created ...
I1107 17:17:51.587896 265599 system_pods.go:116] waiting for k8s-apps to be running ...
I1107 17:17:51.791610 265599 system_pods.go:86] 7 kube-system pods found
I1107 17:17:51.791656 265599 system_pods.go:89] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
I1107 17:17:51.791666 265599 system_pods.go:89] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
I1107 17:17:51.791683 265599 system_pods.go:89] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
I1107 17:17:51.791692 265599 system_pods.go:89] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
I1107 17:17:51.791699 265599 system_pods.go:89] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
I1107 17:17:51.791707 265599 system_pods.go:89] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:51.791717 265599 system_pods.go:89] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
I1107 17:17:51.791725 265599 system_pods.go:126] duration metric: took 203.823982ms to wait for k8s-apps to be running ...
I1107 17:17:51.791734 265599 system_svc.go:44] waiting for kubelet service to be running ....
I1107 17:17:51.791785 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:51.802112 265599 system_svc.go:56] duration metric: took 10.369415ms WaitForService to wait for kubelet.
I1107 17:17:51.802147 265599 kubeadm.go:573] duration metric: took 3.070599627s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1107 17:17:51.802170 265599 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:51.987329 265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:51.987365 265599 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:51.987379 265599 node_conditions.go:105] duration metric: took 185.202183ms to run NodePressure ...
I1107 17:17:51.987392 265599 start.go:217] waiting for startup goroutines ...
I1107 17:17:51.987763 265599 ssh_runner.go:195] Run: rm -f paused
I1107 17:17:52.043023 265599 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
I1107 17:17:52.045707 265599 out.go:177] * Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default
*
* ==> Docker <==
* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:53 UTC. --
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.867503766Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 708aac62fe16d29b27b7e03823a98eca3e1f022eaaaae07b03b614462c34f61c], retrying...."
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.947874677Z" level=info msg="Removing stale sandbox 08bdce8089e979563c1c35fc2b9cb00ca97ae33cb7c45028d6147314b55324da (6d1abd3e30d792833852b3f43c7effc3075f17e2807dee93ee5437621536102e)"
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.949913639Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 f88327d8868c8ad0f7411a8b72ba2baa71bca468214ef9b295ee84ffe8afcc29], retrying...."
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.982329624Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.027920216Z" level=info msg="Loading containers: done."
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040588525Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040673581Z" level=info msg="Daemon has completed initialization"
Nov 07 17:17:24 pause-171530 systemd[1]: Started Docker Application Container Engine.
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.057458789Z" level=info msg="API listen on [::]:2376"
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.061113478Z" level=info msg="API listen on /var/run/docker.sock"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.523817703Z" level=info msg="ignoring event" container=7c093d736ba0305191d4e798ca0d308583b1c7463ad986b23c2d186951b7d0ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.529123877Z" level=info msg="ignoring event" container=42f2c39561b11166e1cca511011d19541e07606bda37d3d78a6b8d6324edba56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.531551274Z" level=info msg="ignoring event" container=c109021f97b0ec6487f090af18a20062a7df3c8845d39ce8fa8a5e3494da80ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536407618Z" level=info msg="ignoring event" container=bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536463887Z" level=info msg="ignoring event" container=c9629a7195e0926d21d4aebeb78f3778a8379562c623cac143cfd8764639c395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.537822290Z" level=info msg="ignoring event" container=cdc8d9ab8c016ad1726c8ec69dafffa0822704571646314f8f002d64229b9dcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.654013442Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.660623628Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.668658992Z" level=error msg="404d7bd895c853d22c917ec8770367d7a91dafd370c7b8959c3253e584e1eb5d cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.671711028Z" level=error msg="9dc3075461e2264f083ac8045d0398e1cb1b95857a3a65126bf2c8178945eb02 cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.683178370Z" level=error msg="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2 cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730178512Z" level=error msg="1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730223095Z" level=error msg="Handler for POST /v1.40/containers/1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d/start returned error: can't join IPC of container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512: container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 is not running"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.734363189Z" level=error msg="ca313e60699e88a95aade29a7a771b01943787674653d827c9ac778c304b7ee2 cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.889125639Z" level=error msg="b6069c474d48724ad6405cac869a299021de19f0e83735250a6669e95f84de98 cleanup: failed to delete container from containerd: no such container"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
4e27fc3536146 6e38f40d628db 3 seconds ago Running storage-provisioner 0 869229924f7b0
2678c07441af4 5185b96f0becf 17 seconds ago Running coredns 2 c8ae9930fd89e
d128588b435c4 beaaf00edd38a 18 seconds ago Running kube-proxy 3 308cd3b6261d9
fa1fae9e3dd4c 6d23ec0e8b87e 22 seconds ago Running kube-scheduler 3 499d52ff7ec2d
9a2c93b7807eb 0346dbd74bcb9 22 seconds ago Running kube-apiserver 3 ca7019d32208a
c617e5f72b7e0 6039992312758 22 seconds ago Running kube-controller-manager 3 b2be7ef781078
240c58d21dba8 a8a176a5d5d69 22 seconds ago Running etcd 3 af4dddaaaab51
b6069c474d487 5185b96f0becf 25 seconds ago Created coredns 1 cdc8d9ab8c016
9dc3075461e22 0346dbd74bcb9 25 seconds ago Created kube-apiserver 2 c109021f97b0e
404d7bd895c85 6039992312758 25 seconds ago Created kube-controller-manager 2 7c093d736ba03
ca313e60699e8 6d23ec0e8b87e 25 seconds ago Created kube-scheduler 2 42f2c39561b11
1ca6e9485fa8a a8a176a5d5d69 25 seconds ago Created etcd 2 bc4811d3f9f16
d4737d2c0cc12 beaaf00edd38a 25 seconds ago Created kube-proxy 2 c9629a7195e09
*
* ==> coredns [2678c07441af] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = f3fde9de6486f59fe260f641c8b45d450960379ea9d73a7fef0c1feac6c746730bd77c72d2092518703e00d94c78d1eec0c6cb3efcd4dc489238241cea4bf436
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [b6069c474d48] <==
*
*
* ==> describe nodes <==
* Name: pause-171530
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-171530
kubernetes.io/os=linux
minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
minikube.k8s.io/name=pause-171530
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_11_07T17_16_00_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Nov 2022 17:15:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-171530
AcquireTime: <unset>
RenewTime: Mon, 07 Nov 2022 17:17:45 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:15:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:15:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:15:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:17:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: pause-171530
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: 996614ec4c814b87b7ec8ebee3d0e8c9
System UUID: 584d8003-5974-4bad-ab15-c1a6d30346fa
Boot ID: 08dd20cb-78b6-4f23-8a31-d42df46571b3
Kernel Version: 5.15.0-1021-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-565d847f94-r6gbf 100m (1%!)(MISSING) 0 (0%!)(MISSING) 70Mi (0%!)(MISSING) 170Mi (0%!)(MISSING) 101s
kube-system etcd-pause-171530 100m (1%!)(MISSING) 0 (0%!)(MISSING) 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) 113s
kube-system kube-apiserver-pause-171530 250m (3%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 113s
kube-system kube-controller-manager-pause-171530 200m (2%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 113s
kube-system kube-proxy-627q2 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 101s
kube-system kube-scheduler-pause-171530 100m (1%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 113s
kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 4s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%!)(MISSING) 0 (0%!)(MISSING)
memory 170Mi (0%!)(MISSING) 170Mi (0%!)(MISSING)
ephemeral-storage 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-1Gi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 98s kube-proxy
Normal Starting 17s kube-proxy
Normal NodeHasSufficientPID 2m5s (x4 over 2m5s) kubelet Node pause-171530 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m5s (x4 over 2m5s) kubelet Node pause-171530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m5s (x4 over 2m5s) kubelet Node pause-171530 status is now: NodeHasSufficientMemory
Normal Starting 113s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 113s kubelet Node pause-171530 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 113s kubelet Node pause-171530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 113s kubelet Node pause-171530 status is now: NodeHasSufficientPID
Normal NodeNotReady 113s kubelet Node pause-171530 status is now: NodeNotReady
Normal NodeAllocatableEnforced 113s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 103s kubelet Node pause-171530 status is now: NodeReady
Normal RegisteredNode 101s node-controller Node pause-171530 event: Registered Node pause-171530 in Controller
Normal Starting 23s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 23s (x8 over 23s) kubelet Node pause-171530 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23s (x8 over 23s) kubelet Node pause-171530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23s (x7 over 23s) kubelet Node pause-171530 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 23s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 5s node-controller Node pause-171530 event: Registered Node pause-171530 in Controller
*
* ==> dmesg <==
* [ +0.004797] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
[ +0.006797] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=0000000007b82556
[ +0.007369] FS-Cache: O-key=[8] '7fa00f0200000000'
[ +0.004936] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006594] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=000000001524e9eb
[ +0.008729] FS-Cache: N-key=[8] '7fa00f0200000000'
[ +0.488901] FS-Cache: Duplicate cookie detected
[ +0.004717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006779] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=000000004d15690e
[ +0.007381] FS-Cache: O-key=[8] '8ea00f0200000000'
[ +0.004952] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.006607] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=00000000470ffc24
[ +0.008833] FS-Cache: N-key=[8] '8ea00f0200000000'
[Nov 7 16:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Nov 7 17:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000007] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[ +1.008285] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000005] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[ +2.011837] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000035] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[Nov 7 17:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000011] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[ +8.191212] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000044] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[Nov 7 17:14] process 'docker/tmp/qemu-check072764330/check' started with executable stack
*
* ==> etcd [1ca6e9485fa8] <==
*
*
* ==> etcd [240c58d21dba] <==
* {"level":"info","ts":"2022-11-07T17:17:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-171530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2022-11-07T17:17:43.326Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.41473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-565d847f94-r6gbf\" ","response":"range_response_count:1 size:5038"}
{"level":"info","ts":"2022-11-07T17:17:43.326Z","caller":"traceutil/trace.go:171","msg":"trace[1276518897] range","detail":"{range_begin:/registry/pods/kube-system/coredns-565d847f94-r6gbf; range_end:; response_count:1; response_revision:452; }","duration":"152.549915ms","start":"2022-11-07T17:17:43.174Z","end":"2022-11-07T17:17:43.326Z","steps":["trace[1276518897] 'agreement among raft nodes before linearized reading' (duration: 40.877163ms)","trace[1276518897] 'range keys from in-memory index tree' (duration: 111.462423ms)"],"step_count":2}
*
* ==> kernel <==
* 17:17:53 up 1:00, 0 users, load average: 3.46, 3.53, 2.55
Linux pause-171530 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [9a2c93b7807e] <==
* I1107 17:17:34.912107 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1107 17:17:34.912439 1 controller.go:83] Starting OpenAPI AggregationController
I1107 17:17:34.912469 1 available_controller.go:491] Starting AvailableConditionController
I1107 17:17:34.912477 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1107 17:17:34.912134 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1107 17:17:34.912451 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1107 17:17:34.920676 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1107 17:17:34.912711 1 controller.go:85] Starting OpenAPI controller
I1107 17:17:35.019428 1 shared_informer.go:262] Caches are synced for node_authorizer
I1107 17:17:35.019719 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1107 17:17:35.020233 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1107 17:17:35.019789 1 cache.go:39] Caches are synced for autoregister controller
I1107 17:17:35.020532 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1107 17:17:35.020562 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1107 17:17:35.021059 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1107 17:17:35.038005 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1107 17:17:35.688505 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1107 17:17:35.915960 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1107 17:17:36.540683 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1107 17:17:36.550675 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1107 17:17:36.580888 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1107 17:17:36.641282 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1107 17:17:36.648284 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1107 17:17:47.967954 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1107 17:17:48.027205 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [9dc3075461e2] <==
*
*
* ==> kube-controller-manager [404d7bd895c8] <==
*
*
* ==> kube-controller-manager [c617e5f72b7e] <==
* I1107 17:17:48.008590 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I1107 17:17:48.008778 1 event.go:294] "Event occurred" object="pause-171530" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-171530 event: Registered Node pause-171530 in Controller"
I1107 17:17:48.008734 1 taint_manager.go:209] "Sending events to api server"
W1107 17:17:48.008888 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-171530. Assuming now as a timestamp.
I1107 17:17:48.008920 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I1107 17:17:48.019368 1 shared_informer.go:262] Caches are synced for namespace
I1107 17:17:48.020195 1 shared_informer.go:262] Caches are synced for node
I1107 17:17:48.020224 1 range_allocator.go:166] Starting range CIDR allocator
I1107 17:17:48.020230 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I1107 17:17:48.020269 1 shared_informer.go:262] Caches are synced for cidrallocator
I1107 17:17:48.022046 1 shared_informer.go:262] Caches are synced for expand
I1107 17:17:48.023995 1 shared_informer.go:262] Caches are synced for attach detach
I1107 17:17:48.028885 1 shared_informer.go:262] Caches are synced for daemon sets
I1107 17:17:48.040695 1 shared_informer.go:262] Caches are synced for ReplicationController
I1107 17:17:48.059583 1 shared_informer.go:262] Caches are synced for disruption
I1107 17:17:48.093865 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I1107 17:17:48.094015 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I1107 17:17:48.094994 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I1107 17:17:48.095030 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1107 17:17:48.152712 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I1107 17:17:48.181359 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:17:48.224684 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:17:48.538831 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:17:48.624372 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:17:48.624404 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [d128588b435c] <==
* I1107 17:17:35.802654 1 node.go:163] Successfully retrieved node IP: 192.168.85.2
I1107 17:17:35.802795 1 server_others.go:138] "Detected node IP" address="192.168.85.2"
I1107 17:17:35.802838 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1107 17:17:35.823572 1 server_others.go:206] "Using iptables Proxier"
I1107 17:17:35.823628 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1107 17:17:35.823641 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1107 17:17:35.823661 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1107 17:17:35.823700 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:17:35.823862 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:17:35.824181 1 server.go:661] "Version info" version="v1.25.3"
I1107 17:17:35.824201 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:17:35.824705 1 config.go:226] "Starting endpoint slice config controller"
I1107 17:17:35.824729 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1107 17:17:35.824729 1 config.go:317] "Starting service config controller"
I1107 17:17:35.824742 1 shared_informer.go:255] Waiting for caches to sync for service config
I1107 17:17:35.824785 1 config.go:444] "Starting node config controller"
I1107 17:17:35.824797 1 shared_informer.go:255] Waiting for caches to sync for node config
I1107 17:17:35.925677 1 shared_informer.go:262] Caches are synced for node config
I1107 17:17:35.925674 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1107 17:17:35.925738 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [d4737d2c0cc1] <==
*
*
* ==> kube-scheduler [ca313e60699e] <==
*
*
* ==> kube-scheduler [fa1fae9e3dd4] <==
* I1107 17:17:32.057264 1 serving.go:348] Generated self-signed cert in-memory
W1107 17:17:34.927696 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1107 17:17:34.927730 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1107 17:17:34.927742 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1107 17:17:34.927752 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1107 17:17:35.026876 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1107 17:17:35.026910 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:17:35.028404 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1107 17:17:35.032408 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1107 17:17:35.032445 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1107 17:17:35.049814 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1107 17:17:35.150068 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:53 UTC. --
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.471796 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.572533 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.673100 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.773944 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.874639 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.019502 5996 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.020405 5996 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.026112 5996 apiserver.go:52] "Watching apiserver"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.028911 5996 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.029237 5996 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034807 5996 kubelet_node_status.go:108] "Node was previously registered" node="pause-171530"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034917 5996 kubelet_node_status.go:73] "Successfully registered node" node="pause-171530"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044165 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-xtables-lock\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044237 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxrf\" (UniqueName: \"kubernetes.io/projected/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-api-access-xlxrf\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044387 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpcd\" (UniqueName: \"kubernetes.io/projected/4070c2b0-f450-4494-afc9-30615ea8f3c9-kube-api-access-2kpcd\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044450 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-lib-modules\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044482 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4070c2b0-f450-4494-afc9-30615ea8f3c9-config-volume\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044514 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-proxy\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044543 5996 reconciler.go:169] "Reconciler: start to sync state"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.630307 5996 scope.go:115] "RemoveContainer" containerID="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2"
Nov 07 17:17:37 pause-171530 kubelet[5996]: I1107 17:17:37.800520 5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Nov 07 17:17:44 pause-171530 kubelet[5996]: I1107 17:17:44.973868 5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.752701 5996 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934212 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pv8z\" (UniqueName: \"kubernetes.io/projected/225d8eea-c00a-46a3-8b89-abb34458db76-kube-api-access-4pv8z\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934319 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/225d8eea-c00a-46a3-8b89-abb34458db76-tmp\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
*
* ==> storage-provisioner [4e27fc353614] <==
* I1107 17:17:50.349388 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1107 17:17:50.361550 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1107 17:17:50.361616 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1107 17:17:50.369430 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1107 17:17:50.369585 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"892faada-f17d-4afd-8626-0abe858770d6", APIVersion:"v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb became leader
I1107 17:17:50.369661 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
I1107 17:17:50.470629 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-171530 -n pause-171530
=== CONT TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:261: (dbg) Run: kubectl --context pause-171530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-171530 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-171530 describe pod : exit status 1 (59.960205ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-171530 describe pod : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect pause-171530
helpers_test.go:235: (dbg) docker inspect pause-171530:
-- stdout --
[
{
"Id": "e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550",
"Created": "2022-11-07T17:15:38.935447727Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 241803,
"ExitCode": 0,
"Error": "",
"StartedAt": "2022-11-07T17:15:39.387509554Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
"ResolvConfPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hostname",
"HostsPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/hosts",
"LogPath": "/var/lib/docker/containers/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550/e3da1593738710d2101528ac5cc35b650e85f05463944c9b69522e36bd3d9550-json.log",
"Name": "/pause-171530",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"pause-171530:/var",
"/lib/modules:/lib/modules:ro"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "pause-171530",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 2147483648,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 4294967296,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886-init/diff:/var/lib/docker/overlay2/2fd1fc00a589bf61b81b15f5596b1c421509b0ed94a0073de8df35851e0104fd/diff:/var/lib/docker/overlay2/ca94f1e5c7c58ab040213044ce029a51c1ea19ec2ae58d30e36b7c461dac5b75/diff:/var/lib/docker/overlay2/e42a9a60bb0ccca9f6ebc3bec24f638bafba48d604bd99af2d43cee1225c9466/diff:/var/lib/docker/overlay2/3474eef000daf16045ddcd082155e02d3adc432e026d93a79f6650da6b7bbe2c/diff:/var/lib/docker/overlay2/2c37502622a619527bab9f0e94b3c9e8ea823ff6ffdc84760dfeca0a7a1d2ba9/diff:/var/lib/docker/overlay2/c89ceddb787dc6015274fbee4e47c019bcb7637c523d5d053aafccc75f2d8c5b/diff:/var/lib/docker/overlay2/d13aa31ebe50e77225149ff2f5361d34b4b4dcbeb3b0bc0a15e35f3d4a8b7756/diff:/var/lib/docker/overlay2/c95f6f4ff58fc27002c40206891dabcbf4ed1b39c8f3584432f15b72a15920c1/diff:/var/lib/docker/overlay2/609367ca657fad1a480fd0d0075ab9d34c5556928b3f753bf75b7937a8b74ee8/diff:/var/lib/docker/overlay2/02a742
81aea9f2e787ac6f6c4ac9f7d01ae11e33439e4787dff010ca49918d6b/diff:/var/lib/docker/overlay2/97be1349403116decda81fc5f089a2db445d4c5a72b26e4fa1d2d69bc8f5b867/diff:/var/lib/docker/overlay2/0a0a5163f70151b385895e742fd238ec8e8e4f76def9c619677619db2a6d5b08/diff:/var/lib/docker/overlay2/5659ee0023498bf40cbbec8f9a2f0fddfc95419655c96d6605a451a2c46c6036/diff:/var/lib/docker/overlay2/490c47e44446d2723d18ba6ae67ce415128dbc5fd055c8b0c3af734b0a072691/diff:/var/lib/docker/overlay2/303dd4de2e78ffebe2a8b0327ff89f434f0d94efec1239397b26f584669c6688/diff:/var/lib/docker/overlay2/57cd5e60d0e6efc4eba5b1d3312be411722b2dbe779b38d7e29451cb53536ed6/diff:/var/lib/docker/overlay2/ebe05a325862fb9343e31e938f8b0cbebb9eac74b601c1cbd7c51d82932d20b4/diff:/var/lib/docker/overlay2/8536312e6228bdf272e430339824f16762dc9bb32d3fbcd5a2704ed1cbd37e64/diff:/var/lib/docker/overlay2/2598be8b2bb739fc75e87aee71f5af665456fffb16f599676335c74f15ae6391/diff:/var/lib/docker/overlay2/4d2d35e9d340ea3932b4095e279f70853bcd0793bb323921891c0c769627f2c5/diff:/var/lib/d
ocker/overlay2/4d826174051f4f89d8c7f9e2a1c0deeedf4fe1375b7e4805b1507830dfcb85eb/diff:/var/lib/docker/overlay2/04619ad2580acc4047033104b728374c0bcab41b326af981fd92107ded6f8715/diff:/var/lib/docker/overlay2/653c7b7d9b3ff747507ce6d4c8750195142e3c1e5dd8776d1f5ad68da192b0c3/diff:/var/lib/docker/overlay2/7feba1b41892a093a69f3006a5955540f607a8c16986fd594da627470dc20b50/diff:/var/lib/docker/overlay2/edfa060eb3735b8c7368bfa84da65c47f0381d016fcb1f23338cbe984ffb4309/diff:/var/lib/docker/overlay2/7bc7096889faa87a4f3542932b25941d0cb3ebdca2eb7a8323c0b437c946ca84/diff:/var/lib/docker/overlay2/6d9c19e156f90bc4ce093d160661251be6f95a51a9e0712f2a79c6a08cd996cd/diff:/var/lib/docker/overlay2/f5ba9cd7997e8cdfc6fb27c76c069767b07cc8201e7e0ef7c1a3ffa443525fb1/diff:/var/lib/docker/overlay2/43277eab35f847188e2fbacd196549314d6463948690b6eb7218cfe6ecc19b17/diff:/var/lib/docker/overlay2/ef090d552b4022f86d7bdf79bbc298e347a3e535c804f65b2d33683e0864901d/diff:/var/lib/docker/overlay2/8ef9f5644e2d99ddd144a8c44988dff320901634fa10fdd2ceb63b44464
942d2/diff:/var/lib/docker/overlay2/8db604496435b1f4a13ceca647b7f365eccc2122c46c001b46d3343020dce882/diff:/var/lib/docker/overlay2/aa63ff25f14d23e22d30a5f6ffdca4dc610d3a56fda7fcf8128955229e8179ac/diff:/var/lib/docker/overlay2/d8e836f399115dec3f57c3bdae8cfe9459ca00fb4db1619f7c32a54c17f2696a/diff:/var/lib/docker/overlay2/e8706f9f543307c51f76840c008a49519273628b367c558c81472382319ee067/diff:/var/lib/docker/overlay2/410562df42124ab024d1aed6c452424839223794de2fac149e33e3a2aaad7db5/diff:/var/lib/docker/overlay2/24ba0b84d34cf83f31c6e6420465d970cd940052bc918b875c8320dfbeccb3fc/diff:/var/lib/docker/overlay2/cfd31a3b8ba33133312104bac0d05c9334975dd18cb3dfff6ba901668d8935cb/diff:/var/lib/docker/overlay2/2bfc0a7a2746e54d77a9a1838e077ca17b8bd024966ed7fc7f4cfceffc1e41c9/diff:/var/lib/docker/overlay2/67ae264c7fe2b9c7f659d1bbdccdc178c34230e3b6aa815b7f3ff24d50f1ca5a/diff:/var/lib/docker/overlay2/2f921d0a0caaca67918401f3f9b193c0e89b931f174e447a79ba82b2a5743c6e/diff:/var/lib/docker/overlay2/8f6f97c7885b0f2745adf21261ead041f0b7ce
88d0ab325cfafd1cf3b9aa07f3/diff",
"MergedDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/merged",
"UpperDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/diff",
"WorkDir": "/var/lib/docker/overlay2/351a486d70761b8dfb0f7d28c2e91c06707ad1dc95a3d03a5a488d1d96092886/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "pause-171530",
"Source": "/var/lib/docker/volumes/pause-171530/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "pause-171530",
"Domainname": "",
"User": "root",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "pause-171530",
"name.minikube.sigs.k8s.io": "pause-171530",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "a9adb1a46308a44769722d4564542b00b60699767153f3cfdcf9adf8a13796ed",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49369"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49368"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49365"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49367"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "49366"
}
]
},
"SandboxKey": "/var/run/docker/netns/a9adb1a46308",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"pause-171530": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": [
"e3da15937387",
"pause-171530"
],
"NetworkID": "39ab6118a516dd29e38bb2d528840c29808f0aaff829c163fb133591392f975d",
"EndpointID": "f05b8ecc16b4a46e2d24102363dbe97c03cc31d021c5d068a263b87ac53329f9",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p pause-171530 -n pause-171530
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p pause-171530 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-171530 logs -n 25: (1.167744437s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs:
-- stdout --
*
* ==> Audit <==
* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| ssh | cert-options-171318 ssh | cert-options-171318 | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-171318 -- sudo | cert-options-171318 | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-171318 | cert-options-171318 | jenkins | v1.28.0 | 07 Nov 22 17:13 UTC | 07 Nov 22 17:13 UTC |
| ssh | docker-flags-171335 ssh | docker-flags-171335 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-171335 ssh | docker-flags-171335 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-171335 | docker-flags-171335 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| start | -p kubernetes-upgrade-171418 | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:14 UTC |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.16.0 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p missing-upgrade-171351 | missing-upgrade-171351 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p stopped-upgrade-171343 | stopped-upgrade-171343 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| stop | -p kubernetes-upgrade-171418 | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:14 UTC | 07 Nov 22 17:15 UTC |
| delete | -p stopped-upgrade-171343 | stopped-upgrade-171343 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
| start | -p kubernetes-upgrade-171418 | kubernetes-upgrade-171418 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | |
| | --memory=2200 | | | | | |
| | --kubernetes-version=v1.25.3 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p missing-upgrade-171351 | missing-upgrade-171351 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:15 UTC |
| start | -p pause-171530 --memory=2048 | pause-171530 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:17 UTC |
| | --install-addons=false | | | | | |
| | --wait=all --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p cert-expiration-171219 | cert-expiration-171219 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p running-upgrade-171507 | running-upgrade-171507 | jenkins | v1.28.0 | 07 Nov 22 17:15 UTC | 07 Nov 22 17:16 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p running-upgrade-171507 | running-upgrade-171507 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
| start | -p auto-171300 --memory=2048 | auto-171300 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p cert-expiration-171219 | cert-expiration-171219 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:16 UTC |
| start | -p kindnet-171300 | kindnet-171300 | jenkins | v1.28.0 | 07 Nov 22 17:16 UTC | 07 Nov 22 17:17 UTC |
| | --memory=2048 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=kindnet --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| start | -p pause-171530 | pause-171530 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| | --alsologtostderr | | | | | |
| | -v=1 --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p kindnet-171300 pgrep -a | kindnet-171300 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| | kubelet | | | | | |
| delete | -p kindnet-171300 | kindnet-171300 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| start | -p cilium-171301 --memory=2048 | cilium-171301 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | |
| | --alsologtostderr | | | | | |
| | --wait=true --wait-timeout=5m | | | | | |
| | --cni=cilium --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | -p auto-171300 pgrep -a | auto-171300 | jenkins | v1.28.0 | 07 Nov 22 17:17 UTC | 07 Nov 22 17:17 UTC |
| | kubelet | | | | | |
|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
*
* ==> Last Start <==
* Log file created at: 2022/11/07 17:17:39
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1107 17:17:39.909782 273963 out.go:296] Setting OutFile to fd 1 ...
I1107 17:17:39.909910 273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:17:39.909920 273963 out.go:309] Setting ErrFile to fd 2...
I1107 17:17:39.909925 273963 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1107 17:17:39.910036 273963 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15310-3679/.minikube/bin
I1107 17:17:39.910611 273963 out.go:303] Setting JSON to false
I1107 17:17:39.912756 273963 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3611,"bootTime":1667837849,"procs":1171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1107 17:17:39.912825 273963 start.go:126] virtualization: kvm guest
I1107 17:17:39.916343 273963 out.go:177] * [cilium-171301] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
I1107 17:17:39.918167 273963 out.go:177] - MINIKUBE_LOCATION=15310
I1107 17:17:39.918122 273963 notify.go:220] Checking for updates...
I1107 17:17:39.919930 273963 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1107 17:17:39.921709 273963 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/15310-3679/kubeconfig
I1107 17:17:39.923329 273963 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/15310-3679/.minikube
I1107 17:17:39.924851 273963 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1107 17:17:39.927024 273963 config.go:180] Loaded profile config "auto-171300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:39.927142 273963 config.go:180] Loaded profile config "kubernetes-upgrade-171418": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:39.927235 273963 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:39.927287 273963 driver.go:365] Setting default libvirt URI to qemu:///system
I1107 17:17:39.959963 273963 docker.go:137] docker version: linux-20.10.21
I1107 17:17:39.960043 273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:17:40.066046 273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:39.981648038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
I1107 17:17:40.066199 273963 docker.go:254] overlay module found
I1107 17:17:40.069246 273963 out.go:177] * Using the docker driver based on user configuration
I1107 17:17:40.070821 273963 start.go:282] selected driver: docker
I1107 17:17:40.070848 273963 start.go:808] validating driver "docker" against <nil>
I1107 17:17:40.070871 273963 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1107 17:17:40.072076 273963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1107 17:17:40.184024 273963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-11-07 17:17:40.095572549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1021-gcp OperatingSystem:Ubuntu 20.04.5 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33660669952 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:20.10.21 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c90a442489720eec95342e1789ee8a5e1b9536f Expected:1c90a442489720eec95342e1789ee8a5e1b9536f} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1-docker] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
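
The two blocks above are the output of docker system info --format "{{json .}}", run once while picking the driver and once more while validating it. A stand-alone sketch of the same probe is below; the struct and the handful of fields decoded are an illustrative selection, not the exact set minikube reads:

    // dockerinfo.go - run docker's JSON info probe and decode a few capability fields.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        ServerVersion   string
        OperatingSystem string
        NCPU            int
        MemTotal        int64
        CgroupDriver    string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
            info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
    }

Asking the daemon for {{json .}} rather than scraping the human-readable "docker info" layout keeps the probe stable across docker releases.
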
I1107 17:17:40.184162 273963 start_flags.go:303] no existing cluster config was found, will generate one from the flags
I1107 17:17:40.184327 273963 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1107 17:17:40.186905 273963 out.go:177] * Using Docker driver with root privileges
I1107 17:17:40.188888 273963 cni.go:95] Creating CNI manager for "cilium"
I1107 17:17:40.188919 273963 start_flags.go:312] Found "Cilium" CNI - setting NetworkPlugin=cni
I1107 17:17:40.188929 273963 start_flags.go:317] config:
{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:
cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:40.191042 273963 out.go:177] * Starting control plane node cilium-171301 in cluster cilium-171301
I1107 17:17:40.192756 273963 cache.go:120] Beginning downloading kic base image for docker with docker
I1107 17:17:40.194622 273963 out.go:177] * Pulling base image ...
I1107 17:17:40.196366 273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:40.196424 273963 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
I1107 17:17:40.196439 273963 cache.go:57] Caching tarball of preloaded images
I1107 17:17:40.196478 273963 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
I1107 17:17:40.196755 273963 preload.go:174] Found /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1107 17:17:40.196770 273963 cache.go:60] Finished verifying existence of preloaded tar for v1.25.3 on docker
I1107 17:17:40.196994 273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
I1107 17:17:40.197037 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json: {Name:mke8d5318de654621f86e157b3b792411142e89b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:40.226030 273963 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
I1107 17:17:40.226064 273963 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
I1107 17:17:40.226085 273963 cache.go:208] Successfully downloaded all kic artifacts
I1107 17:17:40.226119 273963 start.go:364] acquiring machines lock for cilium-171301: {Name:mk73a4f694f74dc8530831944bb92040f98c814b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1107 17:17:40.226272 273963 start.go:368] acquired machines lock for "cilium-171301" in 128.513µs
I1107 17:17:40.226338 273963 start.go:93] Provisioning new machine with config: &{Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1107 17:17:40.226851 273963 start.go:125] createHost starting for "" (driver="docker")
I1107 17:17:35.925106 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:35.931883 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1107 17:17:35.931924 265599 api_server.go:102] status: https://192.168.85.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1107 17:17:36.424461 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:36.430147 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
I1107 17:17:36.437609 265599 api_server.go:140] control plane version: v1.25.3
I1107 17:17:36.437636 265599 api_server.go:130] duration metric: took 4.709684273s to wait for apiserver health ...
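
The 500/200 exchange above is the apiserver health wait: minikube polls /healthz on the node IP, the apiserver answers 500 with a per-hook breakdown until every post-start hook has finished (rbac/bootstrap-roles is the last one still pending here), and the wait ends on the first 200 "ok". The timestamps suggest a retry roughly every 500ms. A self-contained sketch of the same poll, using only the standard library and skipping TLS verification instead of loading the minikube CA (a deliberate simplification), is:

    // healthzpoll.go - poll the apiserver /healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver certificate is signed by minikubeCA; this sketch skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.85.2:8443/healthz" // node IP taken from the lines above
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("healthz unreachable:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

Appending ?verbose to the URL should return the same per-hook breakdown even once the check succeeds.
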
I1107 17:17:36.437645 265599 cni.go:95] Creating CNI manager for ""
I1107 17:17:36.437652 265599 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
I1107 17:17:36.437659 265599 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:36.447744 265599 system_pods.go:59] 6 kube-system pods found
I1107 17:17:36.447788 265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1107 17:17:36.447801 265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1107 17:17:36.447812 265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1107 17:17:36.447823 265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1107 17:17:36.447833 265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1107 17:17:36.447851 265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:36.447860 265599 system_pods.go:74] duration metric: took 10.195758ms to wait for pod list to return data ...
I1107 17:17:36.447873 265599 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:36.452085 265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:36.452127 265599 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:36.452142 265599 node_conditions.go:105] duration metric: took 4.263555ms to run NodePressure ...
I1107 17:17:36.452169 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1107 17:17:36.655569 265599 kubeadm.go:763] waiting for restarted kubelet to initialise ...
I1107 17:17:36.659806 265599 kubeadm.go:778] kubelet initialised
I1107 17:17:36.659830 265599 kubeadm.go:779] duration metric: took 4.236781ms waiting for restarted kubelet to initialise ...
I1107 17:17:36.659837 265599 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:36.664724 265599 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:38.678405 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:39.764430 254808 pod_ready.go:92] pod "coredns-565d847f94-zscpb" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.764470 254808 pod_ready.go:81] duration metric: took 37.51089729s waiting for pod "coredns-565d847f94-zscpb" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.764489 254808 pod_ready.go:78] waiting up to 5m0s for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.769704 254808 pod_ready.go:92] pod "etcd-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.769729 254808 pod_ready.go:81] duration metric: took 5.228844ms waiting for pod "etcd-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.769741 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.774830 254808 pod_ready.go:92] pod "kube-apiserver-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.774850 254808 pod_ready.go:81] duration metric: took 5.101563ms waiting for pod "kube-apiserver-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.774863 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.779742 254808 pod_ready.go:92] pod "kube-controller-manager-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.779767 254808 pod_ready.go:81] duration metric: took 4.895957ms waiting for pod "kube-controller-manager-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.779780 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.787718 254808 pod_ready.go:92] pod "kube-proxy-5hjzb" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:39.787745 254808 pod_ready.go:81] duration metric: took 7.956771ms waiting for pod "kube-proxy-5hjzb" in "kube-system" namespace to be "Ready" ...
I1107 17:17:39.787759 254808 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:40.161780 254808 pod_ready.go:92] pod "kube-scheduler-auto-171300" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:40.161804 254808 pod_ready.go:81] duration metric: took 374.038459ms waiting for pod "kube-scheduler-auto-171300" in "kube-system" namespace to be "Ready" ...
I1107 17:17:40.161812 254808 pod_ready.go:38] duration metric: took 39.930959656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
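
pod_ready.go is walking the label selectors listed above and blocking until each matching pod reports the Ready condition. Roughly the same gate can be expressed from outside the test with kubectl wait; the sketch below shells out to kubectl and assumes the active kubeconfig context already points at the cluster. It approximates the behaviour in the log rather than reproducing minikube's code; the 5m timeout mirrors the --wait-timeout=5m flag the auto-171300 profile was started with:

    // podready.go - wait for the system-critical pods by label selector.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        selectors := []string{
            "k8s-app=kube-dns",
            "component=etcd",
            "component=kube-apiserver",
            "component=kube-controller-manager",
            "k8s-app=kube-proxy",
            "component=kube-scheduler",
        }
        for _, sel := range selectors {
            cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
                "--for=condition=Ready", "pod", "--selector", sel, "--timeout=5m")
            out, err := cmd.CombinedOutput()
            fmt.Printf("%s: %s", sel, out)
            if err != nil {
                fmt.Println("  wait failed:", err)
            }
        }
    }
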
I1107 17:17:40.161836 254808 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:17:40.161880 254808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:40.174326 254808 api_server.go:71] duration metric: took 40.098096653s to wait for apiserver process to appear ...
I1107 17:17:40.174356 254808 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:17:40.174385 254808 api_server.go:252] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I1107 17:17:40.180459 254808 api_server.go:278] https://192.168.94.2:8443/healthz returned 200:
ok
I1107 17:17:40.181698 254808 api_server.go:140] control plane version: v1.25.3
I1107 17:17:40.181729 254808 api_server.go:130] duration metric: took 7.366556ms to wait for apiserver health ...
I1107 17:17:40.181739 254808 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:40.365251 254808 system_pods.go:59] 7 kube-system pods found
I1107 17:17:40.365291 254808 system_pods.go:61] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
I1107 17:17:40.365298 254808 system_pods.go:61] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
I1107 17:17:40.365305 254808 system_pods.go:61] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
I1107 17:17:40.365313 254808 system_pods.go:61] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
I1107 17:17:40.365320 254808 system_pods.go:61] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
I1107 17:17:40.365326 254808 system_pods.go:61] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
I1107 17:17:40.365341 254808 system_pods.go:61] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
I1107 17:17:40.365353 254808 system_pods.go:74] duration metric: took 183.607113ms to wait for pod list to return data ...
I1107 17:17:40.365368 254808 default_sa.go:34] waiting for default service account to be created ...
I1107 17:17:40.561571 254808 default_sa.go:45] found service account: "default"
I1107 17:17:40.561596 254808 default_sa.go:55] duration metric: took 196.218934ms for default service account to be created ...
I1107 17:17:40.561604 254808 system_pods.go:116] waiting for k8s-apps to be running ...
I1107 17:17:40.765129 254808 system_pods.go:86] 7 kube-system pods found
I1107 17:17:40.765166 254808 system_pods.go:89] "coredns-565d847f94-zscpb" [a8e008dc-4166-4449-8182-2d5998d7e35a] Running
I1107 17:17:40.765200 254808 system_pods.go:89] "etcd-auto-171300" [b26c6dee-c57a-4455-bf34-57e8d4bdae28] Running
I1107 17:17:40.765210 254808 system_pods.go:89] "kube-apiserver-auto-171300" [9702725f-76a4-4828-ba51-3bd1bd31c921] Running
I1107 17:17:40.765218 254808 system_pods.go:89] "kube-controller-manager-auto-171300" [a2722655-640b-4f80-8ecc-0cb3abbc73e1] Running
I1107 17:17:40.765225 254808 system_pods.go:89] "kube-proxy-5hjzb" [e3111b6a-3730-47f4-b80e-fa872011b18d] Running
I1107 17:17:40.765231 254808 system_pods.go:89] "kube-scheduler-auto-171300" [49b194d9-1c66-4db1-964c-72958b48a969] Running
I1107 17:17:40.765237 254808 system_pods.go:89] "storage-provisioner" [af36ca23-ffa5-4472-b090-7e646b93034c] Running
I1107 17:17:40.765245 254808 system_pods.go:126] duration metric: took 203.635578ms to wait for k8s-apps to be running ...
I1107 17:17:40.765255 254808 system_svc.go:44] waiting for kubelet service to be running ....
I1107 17:17:40.765298 254808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:40.776269 254808 system_svc.go:56] duration metric: took 11.004445ms WaitForService to wait for kubelet.
I1107 17:17:40.776304 254808 kubeadm.go:573] duration metric: took 40.700080633s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1107 17:17:40.776325 254808 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:40.962904 254808 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:40.962940 254808 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:40.962955 254808 node_conditions.go:105] duration metric: took 186.624576ms to run NodePressure ...
I1107 17:17:40.962972 254808 start.go:217] waiting for startup goroutines ...
I1107 17:17:40.963411 254808 ssh_runner.go:195] Run: rm -f paused
I1107 17:17:41.016064 254808 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
I1107 17:17:41.019135 254808 out.go:177] * Done! kubectl is now configured to use "auto-171300" cluster and "default" namespace by default
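
The line just before "Done!" compares the kubectl client found on the host against the cluster version and reports the minor-version skew. A toy version of that comparison follows; the threshold at which minikube would actually warn about skew is not visible in this log, so the >1 check here is only an assumption:

    // skew.go - compute the client/cluster minor-version skew reported above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectlVersion, clusterVersion := "1.25.3", "1.25.3" // values from the log
        skew := minor(kubectlVersion) - minor(clusterVersion)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
        if skew > 1 { // assumed threshold, not taken from minikube
            fmt.Println("warning: kubectl and the cluster differ by more than one minor version")
        }
    }
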
I1107 17:17:38.938491 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:38.966502 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:38.966589 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:38.992316 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:38.992406 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:39.018933 233006 logs.go:274] 0 containers: []
W1107 17:17:39.018962 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:39.019012 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:39.046418 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:39.046497 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:39.072173 233006 logs.go:274] 0 containers: []
W1107 17:17:39.072208 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:39.072257 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:39.098237 233006 logs.go:274] 0 containers: []
W1107 17:17:39.098266 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:39.098309 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:39.124960 233006 logs.go:274] 0 containers: []
W1107 17:17:39.124989 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:39.125038 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:39.153502 233006 logs.go:274] 3 containers: [8891a1b14e04 1c2c98a4c31a 371287b3c0c6]
I1107 17:17:39.153554 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:39.153570 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:39.193713 233006 logs.go:123] Gathering logs for kube-controller-manager [1c2c98a4c31a] ...
I1107 17:17:39.193770 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c2c98a4c31a"
I1107 17:17:39.222940 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:39.222968 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:39.264980 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:39.265019 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:39.306266 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:39.306303 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:39.375563 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:39.375608 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:39.446970 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:39.446997 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:39.447010 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:39.478856 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:39.478893 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:39.551509 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:39.551552 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:39.588201 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:39.588235 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:39.622485 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:39.622531 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:39.711503 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:39.711531 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:39.746571 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:39.746605 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
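
Process 233006 (the kubernetes-upgrade-171418 profile, whose apiserver is refusing connections on localhost:8443) is in a diagnostics loop: for every control-plane component it lists matching containers, then tails their logs, and it also collects the kubelet and docker journals, dmesg, "kubectl describe nodes", and the container status. In the test these commands run inside the node over SSH (ssh_runner); the sketch below reproduces only the container-discovery-plus-tail part, run locally:

    // gatherlogs.go - find per-component containers by name filter and tail their logs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_kube-scheduler", "k8s_kube-controller-manager"}
        for _, name := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(name, "lookup failed:", err)
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                fmt.Printf("=== %s (%s) ===\n", name, id)
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(logs))
            }
        }
    }

The "which crictl || echo crictl" / "docker ps -a" fallback on the container-status line presumably lets the same gathering code run on non-Docker runtimes as well.
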
I1107 17:17:42.339399 233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1107 17:17:42.339827 233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I1107 17:17:42.439058 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:42.465860 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:42.465945 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:42.503349 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:42.503419 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:42.529180 233006 logs.go:274] 0 containers: []
W1107 17:17:42.529209 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:42.529272 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:42.556348 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:42.556424 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:42.585423 233006 logs.go:274] 0 containers: []
W1107 17:17:42.585457 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:42.585514 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:42.612694 233006 logs.go:274] 0 containers: []
W1107 17:17:42.612730 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:42.612806 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:42.638513 233006 logs.go:274] 0 containers: []
W1107 17:17:42.638534 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:42.638584 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:42.666063 233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
I1107 17:17:42.666121 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:42.666139 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:42.683133 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:42.683163 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:42.718461 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:42.718496 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:42.752314 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:42.752340 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:42.774285 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:42.774322 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:42.808596 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:42.808627 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:42.886659 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:42.886698 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:42.960618 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:42.960656 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:42.960670 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:43.002805 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:43.002858 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:43.082429 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:43.082467 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:43.115843 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:43.115911 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:43.190735 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:43.190775 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:40.229568 273963 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
I1107 17:17:40.229875 273963 start.go:159] libmachine.API.Create for "cilium-171301" (driver="docker")
I1107 17:17:40.229916 273963 client.go:168] LocalClient.Create starting
I1107 17:17:40.230045 273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem
I1107 17:17:40.230090 273963 main.go:134] libmachine: Decoding PEM data...
I1107 17:17:40.230115 273963 main.go:134] libmachine: Parsing certificate...
I1107 17:17:40.230183 273963 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem
I1107 17:17:40.230204 273963 main.go:134] libmachine: Decoding PEM data...
I1107 17:17:40.230217 273963 main.go:134] libmachine: Parsing certificate...
I1107 17:17:40.230581 273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1107 17:17:40.255766 273963 cli_runner.go:211] docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1107 17:17:40.255850 273963 network_create.go:272] running [docker network inspect cilium-171301] to gather additional debugging logs...
I1107 17:17:40.255875 273963 cli_runner.go:164] Run: docker network inspect cilium-171301
W1107 17:17:40.279408 273963 cli_runner.go:211] docker network inspect cilium-171301 returned with exit code 1
I1107 17:17:40.279440 273963 network_create.go:275] error running [docker network inspect cilium-171301]: docker network inspect cilium-171301: exit status 1
stdout:
[]
stderr:
Error: No such network: cilium-171301
I1107 17:17:40.279451 273963 network_create.go:277] output of [docker network inspect cilium-171301]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: cilium-171301
** /stderr **
I1107 17:17:40.279494 273963 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:17:40.309079 273963 network.go:246] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-aa8bc6b4377d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:f9:4a:a0:7f}}
I1107 17:17:40.309777 273963 network.go:246] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-46185e74412a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:c3:83:d6}}
I1107 17:17:40.310466 273963 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc0004bc5f8] misses:0}
I1107 17:17:40.310501 273963 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1107 17:17:40.310513 273963 network_create.go:115] attempt to create docker network cilium-171301 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1107 17:17:40.310578 273963 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cilium-171301 cilium-171301
I1107 17:17:40.390589 273963 network_create.go:99] docker network cilium-171301 192.168.67.0/24 created
I1107 17:17:40.390635 273963 kic.go:106] calculated static IP "192.168.67.2" for the "cilium-171301" container
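
network.go scans candidate /24 subnets and reserves the first free one; in this run the profiles end up on 192.168.49.0, .58.0, .67.0, .76.0, .85.0 and .94.0, i.e. the third octet advances in steps of nine as earlier networks turn out to be taken. kic.go then derives the node address from the chosen subnet: .1 is the bridge gateway and .2 the container. A small sketch of that last derivation (the subnet scan itself is omitted):

    // staticip.go - derive gateway and node IP from the reserved subnet.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        subnet := netip.MustParsePrefix("192.168.67.0/24") // free subnet picked in the log
        gateway := subnet.Addr().Next()                    // 192.168.67.1, the bridge gateway
        node := gateway.Next()                             // 192.168.67.2, the container's static IP
        fmt.Println("gateway:", gateway, "node:", node)
    }
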
I1107 17:17:40.390704 273963 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1107 17:17:40.426276 273963 cli_runner.go:164] Run: docker volume create cilium-171301 --label name.minikube.sigs.k8s.io=cilium-171301 --label created_by.minikube.sigs.k8s.io=true
I1107 17:17:40.452601 273963 oci.go:103] Successfully created a docker volume cilium-171301
I1107 17:17:40.452735 273963 cli_runner.go:164] Run: docker run --rm --name cilium-171301-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --entrypoint /usr/bin/test -v cilium-171301:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
I1107 17:17:41.261517 273963 oci.go:107] Successfully prepared a docker volume cilium-171301
I1107 17:17:41.261565 273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:41.261584 273963 kic.go:179] Starting extracting preloaded images to volume ...
I1107 17:17:41.261639 273963 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
I1107 17:17:44.552998 273963 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/15310-3679/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v cilium-171301:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (3.291298492s)
I1107 17:17:44.553029 273963 kic.go:188] duration metric: took 3.291442 seconds to extract preloaded images to volume
W1107 17:17:44.553206 273963 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1107 17:17:44.553333 273963 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1107 17:17:44.659014 273963 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cilium-171301 --name cilium-171301 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cilium-171301 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cilium-171301 --network cilium-171301 --ip 192.168.67.2 --volume cilium-171301:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
I1107 17:17:40.678711 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:42.751499 265599 pod_ready.go:102] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"False"
I1107 17:17:45.178920 265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:45.178953 265599 pod_ready.go:81] duration metric: took 8.514203128s waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:45.178969 265599 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:47.190344 265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:47.190385 265599 pod_ready.go:81] duration metric: took 2.011408194s waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:47.190401 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.703190 265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.703227 265599 pod_ready.go:81] duration metric: took 1.512816405s waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.703241 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.708302 265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.708326 265599 pod_ready.go:81] duration metric: took 5.077395ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.708335 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.713353 265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.713373 265599 pod_ready.go:81] duration metric: took 5.032187ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.713382 265599 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.718276 265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:48.718298 265599 pod_ready.go:81] duration metric: took 4.909784ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.718308 265599 pod_ready.go:38] duration metric: took 12.058462568s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:48.718326 265599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1107 17:17:48.725688 265599 ops.go:34] apiserver oom_adj: -16
I1107 17:17:48.725713 265599 kubeadm.go:631] restartCluster took 23.70983267s
I1107 17:17:48.725723 265599 kubeadm.go:398] StartCluster complete in 23.739715552s
I1107 17:17:48.725742 265599 settings.go:142] acquiring lock: {Name:mke91789b0d6e4070893f671805542745cc27d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:48.725827 265599 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/15310-3679/kubeconfig
I1107 17:17:48.727240 265599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/kubeconfig: {Name:mk0b702cd34f333a37178f1520735cf3ce85aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:48.728367 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
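
The rest.Config dump above is the client configuration built for pause-171530: host https://192.168.85.2:8443 plus the profile's client certificate, key, and the minikube CA. An equivalent config can be built from the same kubeconfig with client-go; the sketch below uses the external k8s.io/client-go and k8s.io/apimachinery modules and is not minikube's own kapi helper:

    // kapiclient.go - build a *rest.Config from the test kubeconfig and list kube-system pods.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/home/jenkins/minikube-integration/15310-3679/kubeconfig" // path from the log
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("kube-system pods:", len(pods.Items))
    }

Passing an empty master URL to BuildConfigFromFlags loads whatever context the kubeconfig currently selects, which at this point in the log has just been updated for pause-171530.
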
I1107 17:17:48.731431 265599 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-171530" rescaled to 1
I1107 17:17:48.731509 265599 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1107 17:17:48.735381 265599 out.go:177] * Verifying Kubernetes components...
I1107 17:17:45.728936 233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1107 17:17:45.729307 233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I1107 17:17:45.938905 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:45.968231 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:45.968310 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:45.995241 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:45.995316 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:46.024313 233006 logs.go:274] 0 containers: []
W1107 17:17:46.024343 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:46.024394 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:46.054216 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:46.054293 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:46.088627 233006 logs.go:274] 0 containers: []
W1107 17:17:46.088662 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:46.088710 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:46.116330 233006 logs.go:274] 0 containers: []
W1107 17:17:46.116365 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:46.116420 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:46.150637 233006 logs.go:274] 0 containers: []
W1107 17:17:46.150668 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:46.150771 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:46.182148 233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
I1107 17:17:46.182207 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:46.182221 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:46.204275 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:46.204315 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:46.244475 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:46.244515 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:46.337500 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:46.337547 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:46.384737 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:46.384774 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:46.405735 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:46.405772 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:46.443740 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:46.443780 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:46.515276 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:46.515311 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:46.550260 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:46.550314 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:46.632884 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:46.632921 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:46.667751 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:46.667787 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:46.701085 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:46.701121 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:46.780102 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:48.731563 265599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1107 17:17:48.731586 265599 addons.go:486] enableAddons start: toEnable=map[], additional=[]
I1107 17:17:48.731727 265599 config.go:180] Loaded profile config "pause-171530": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:48.737019 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:48.737075 265599 addons.go:65] Setting default-storageclass=true in profile "pause-171530"
I1107 17:17:48.737103 265599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-171530"
I1107 17:17:48.737073 265599 addons.go:65] Setting storage-provisioner=true in profile "pause-171530"
I1107 17:17:48.737183 265599 addons.go:227] Setting addon storage-provisioner=true in "pause-171530"
W1107 17:17:48.737191 265599 addons.go:236] addon storage-provisioner should already be in state true
I1107 17:17:48.737247 265599 host.go:66] Checking if "pause-171530" exists ...
I1107 17:17:48.737345 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.737690 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.748838 265599 node_ready.go:35] waiting up to 6m0s for node "pause-171530" to be "Ready" ...
I1107 17:17:48.755501 265599 node_ready.go:49] node "pause-171530" has status "Ready":"True"
I1107 17:17:48.755530 265599 node_ready.go:38] duration metric: took 6.650143ms waiting for node "pause-171530" to be "Ready" ...
I1107 17:17:48.755544 265599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:48.774070 265599 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1107 17:17:45.119361 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Running}}
I1107 17:17:45.160545 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
I1107 17:17:45.191402 273963 cli_runner.go:164] Run: docker exec cilium-171301 stat /var/lib/dpkg/alternatives/iptables
I1107 17:17:45.267825 273963 oci.go:144] the created container "cilium-171301" has a running status.
I1107 17:17:45.267856 273963 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa...
I1107 17:17:45.381762 273963 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1107 17:17:45.520399 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
I1107 17:17:45.581314 273963 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1107 17:17:45.581340 273963 kic_runner.go:114] Args: [docker exec --privileged cilium-171301 chown docker:docker /home/docker/.ssh/authorized_keys]
I1107 17:17:45.671973 273963 cli_runner.go:164] Run: docker container inspect cilium-171301 --format={{.State.Status}}
I1107 17:17:45.703596 273963 machine.go:88] provisioning docker machine ...
I1107 17:17:45.703639 273963 ubuntu.go:169] provisioning hostname "cilium-171301"
I1107 17:17:45.703689 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:45.732869 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:45.733123 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:45.733143 273963 main.go:134] libmachine: About to run SSH command:
sudo hostname cilium-171301 && echo "cilium-171301" | sudo tee /etc/hostname
I1107 17:17:45.878648 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: cilium-171301
I1107 17:17:45.878766 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:45.906394 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:45.906551 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:45.906570 273963 main.go:134] libmachine: About to run SSH command:
if ! grep -xq '.*\scilium-171301' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cilium-171301/g' /etc/hosts;
else
echo '127.0.1.1 cilium-171301' | sudo tee -a /etc/hosts;
fi
fi
I1107 17:17:46.027393 273963 main.go:134] libmachine: SSH cmd err, output: <nil>:
I1107 17:17:46.027440 273963 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/15310-3679/.minikube CaCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15310-3679/.minikube}
I1107 17:17:46.027464 273963 ubuntu.go:177] setting up certificates
I1107 17:17:46.027474 273963 provision.go:83] configureAuth start
I1107 17:17:46.027538 273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
I1107 17:17:46.061281 273963 provision.go:138] copyHostCerts
I1107 17:17:46.061348 273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem, removing ...
I1107 17:17:46.061366 273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem
I1107 17:17:46.061441 273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/ca.pem (1082 bytes)
I1107 17:17:46.061560 273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem, removing ...
I1107 17:17:46.061575 273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem
I1107 17:17:46.061617 273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/cert.pem (1123 bytes)
I1107 17:17:46.061749 273963 exec_runner.go:144] found /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem, removing ...
I1107 17:17:46.061764 273963 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem
I1107 17:17:46.061801 273963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15310-3679/.minikube/key.pem (1675 bytes)
I1107 17:17:46.061863 273963 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem org=jenkins.cilium-171301 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube cilium-171301]
I1107 17:17:46.253924 273963 provision.go:172] copyRemoteCerts
I1107 17:17:46.253999 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1107 17:17:46.254047 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.296985 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:46.384442 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1107 17:17:46.404309 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
I1107 17:17:46.427506 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1107 17:17:46.449504 273963 provision.go:86] duration metric: configureAuth took 422.011748ms
I1107 17:17:46.449540 273963 ubuntu.go:193] setting minikube options for container-runtime
I1107 17:17:46.449738 273963 config.go:180] Loaded profile config "cilium-171301": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
I1107 17:17:46.449813 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.481398 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:46.481541 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:46.481555 273963 main.go:134] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1107 17:17:46.599328 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
I1107 17:17:46.599354 273963 ubuntu.go:71] root file system type: overlay
I1107 17:17:46.599539 273963 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I1107 17:17:46.599598 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.629056 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:46.629241 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:46.629343 273963 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1107 17:17:46.770161 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I1107 17:17:46.770248 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:46.799041 273963 main.go:134] libmachine: Using SSH client type: native
I1107 17:17:46.799188 273963 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil> [] 0s} 127.0.0.1 49384 <nil> <nil>}
I1107 17:17:46.799207 273963 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1107 17:17:47.547232 273963 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2022-10-18 18:18:12.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2022-11-07 17:17:46.766442749 +0000
@@ -1,30 +1,32 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
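The diff output above shows minikube's check-then-swap update of docker.service: it renders docker.service.new, diffs it against the live unit, and only moves the new file into place (followed by daemon-reload, enable, and restart) when the two differ, so an unchanged unit costs no Docker restart. A minimal Go sketch of the same idempotent-replace idea, with hypothetical local paths rather than minikube's own ssh_runner plumbing:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged installs candidate at path only when the contents differ,
// and reports whether the caller should reload/restart the service.
func replaceIfChanged(path string, candidate []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, candidate) {
		return false, nil // identical: skip daemon-reload and restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, candidate, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("restart needed:", changed)
}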
I1107 17:17:47.547272 273963 machine.go:91] provisioned docker machine in 1.84364984s
I1107 17:17:47.547283 273963 client.go:171] LocalClient.Create took 7.317360133s
I1107 17:17:47.547304 273963 start.go:167] duration metric: libmachine.API.Create for "cilium-171301" took 7.317430541s
I1107 17:17:47.547312 273963 start.go:300] post-start starting for "cilium-171301" (driver="docker")
I1107 17:17:47.547320 273963 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1107 17:17:47.547382 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1107 17:17:47.547424 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.580680 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:47.670961 273963 ssh_runner.go:195] Run: cat /etc/os-release
I1107 17:17:47.674334 273963 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1107 17:17:47.674370 273963 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1107 17:17:47.674379 273963 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1107 17:17:47.674385 273963 info.go:137] Remote host: Ubuntu 20.04.5 LTS
I1107 17:17:47.674395 273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/addons for local assets ...
I1107 17:17:47.674457 273963 filesync.go:126] Scanning /home/jenkins/minikube-integration/15310-3679/.minikube/files for local assets ...
I1107 17:17:47.674531 273963 filesync.go:149] local asset: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem -> 101292.pem in /etc/ssl/certs
I1107 17:17:47.674630 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1107 17:17:47.682576 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /etc/ssl/certs/101292.pem (1708 bytes)
I1107 17:17:47.702345 273963 start.go:303] post-start completed in 155.016776ms
I1107 17:17:47.702863 273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
I1107 17:17:47.729269 273963 profile.go:148] Saving config to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/config.json ...
I1107 17:17:47.729653 273963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1107 17:17:47.729754 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.754933 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:47.839677 273963 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1107 17:17:47.843908 273963 start.go:128] duration metric: createHost completed in 7.617038008s
I1107 17:17:47.843931 273963 start.go:83] releasing machines lock for "cilium-171301", held for 7.617622807s
I1107 17:17:47.844011 273963 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cilium-171301
I1107 17:17:47.870280 273963 ssh_runner.go:195] Run: systemctl --version
I1107 17:17:47.870346 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.870364 273963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1107 17:17:47.870434 273963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-171301
I1107 17:17:47.897797 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:47.898053 273963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49384 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/cilium-171301/id_rsa Username:docker}
I1107 17:17:48.013979 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1107 17:17:48.022299 273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
I1107 17:17:48.037257 273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:48.110172 273963 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I1107 17:17:48.198655 273963 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1107 17:17:48.210409 273963 cruntime.go:273] skipping containerd shutdown because we are bound to it
I1107 17:17:48.210475 273963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1107 17:17:48.222331 273963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1107 17:17:48.238231 273963 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1107 17:17:48.324359 273963 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1107 17:17:48.401465 273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:48.479636 273963 ssh_runner.go:195] Run: sudo systemctl restart docker
I1107 17:17:48.709599 273963 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1107 17:17:48.829234 273963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1107 17:17:48.915216 273963 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I1107 17:17:48.926795 273963 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1107 17:17:48.926878 273963 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1107 17:17:48.930979 273963 start.go:472] Will wait 60s for crictl version
I1107 17:17:48.931044 273963 ssh_runner.go:195] Run: sudo crictl version
I1107 17:17:48.968172 273963 start.go:481] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.20
RuntimeApiVersion: 1.41.0
I1107 17:17:48.968235 273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 17:17:49.004145 273963 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1107 17:17:48.776053 265599 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1107 17:17:48.776086 265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1107 17:17:48.776141 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:48.780418 265599 kapi.go:59] client config for pause-171530: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.crt", KeyFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/profiles/pause-171530/client.key", CAFile:"/home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1786940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1107 17:17:48.783994 265599 addons.go:227] Setting addon default-storageclass=true in "pause-171530"
W1107 17:17:48.784033 265599 addons.go:236] addon default-storageclass should already be in state true
I1107 17:17:48.784066 265599 host.go:66] Checking if "pause-171530" exists ...
I1107 17:17:48.784533 265599 cli_runner.go:164] Run: docker container inspect pause-171530 --format={{.State.Status}}
I1107 17:17:48.791755 265599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:48.827118 265599 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
I1107 17:17:48.827146 265599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1107 17:17:48.827202 265599 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-171530
I1107 17:17:48.832614 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:48.844192 265599 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
I1107 17:17:48.858350 265599 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49369 SSHKeyPath:/home/jenkins/minikube-integration/15310-3679/.minikube/machines/pause-171530/id_rsa Username:docker}
I1107 17:17:48.935269 265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1107 17:17:48.958923 265599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1107 17:17:49.187938 265599 pod_ready.go:92] pod "coredns-565d847f94-r6gbf" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.187970 265599 pod_ready.go:81] duration metric: took 396.174585ms waiting for pod "coredns-565d847f94-r6gbf" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.187985 265599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.588753 265599 pod_ready.go:92] pod "etcd-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.588785 265599 pod_ready.go:81] duration metric: took 400.791096ms waiting for pod "etcd-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.588799 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.758403 265599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
I1107 17:17:49.040144 273963 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
I1107 17:17:49.040219 273963 cli_runner.go:164] Run: docker network inspect cilium-171301 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1107 17:17:49.069531 273963 ssh_runner.go:195] Run: grep 192.168.67.1 host.minikube.internal$ /etc/hosts
I1107 17:17:49.072992 273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1107 17:17:49.083058 273963 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
I1107 17:17:49.083116 273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 17:17:49.107581 273963 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1107 17:17:49.107611 273963 docker.go:543] Images already preloaded, skipping extraction
I1107 17:17:49.107668 273963 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1107 17:17:49.133204 273963 docker.go:613] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I1107 17:17:49.133245 273963 cache_images.go:84] Images are preloaded, skipping loading
I1107 17:17:49.133295 273963 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1107 17:17:49.206522 273963 cni.go:95] Creating CNI manager for "cilium"
I1107 17:17:49.206553 273963 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1107 17:17:49.206574 273963 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cilium-171301 NodeName:cilium-171301 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
I1107 17:17:49.206774 273963 kubeadm.go:161] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.67.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/cri-dockerd.sock
name: "cilium-171301"
kubeletExtraArgs:
node-ip: 192.168.67.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.25.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%!"(MISSING)
nodefs.inodesFree: "0%!"(MISSING)
imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
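The kubeadm.yaml generated above is a single file carrying four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A hedged sketch (illustrative only, not minikube's own parser) of splitting such a stream and reporting each document's kind with gopkg.in/yaml.v3:

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

const multiDoc = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(multiDoc))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // no more documents in the stream
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}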
I1107 17:17:49.206866 273963 kubeadm.go:962] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=cilium-171301 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:}
I1107 17:17:49.206924 273963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
I1107 17:17:49.215024 273963 binaries.go:44] Found k8s binaries, skipping transfer
I1107 17:17:49.215106 273963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1107 17:17:49.223091 273963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (475 bytes)
I1107 17:17:49.237727 273963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1107 17:17:49.251298 273963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2036 bytes)
I1107 17:17:49.265109 273963 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I1107 17:17:49.268700 273963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1107 17:17:49.278537 273963 certs.go:54] Setting up /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301 for IP: 192.168.67.2
I1107 17:17:49.278656 273963 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key
I1107 17:17:49.278710 273963 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key
I1107 17:17:49.278784 273963 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key
I1107 17:17:49.278798 273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt with IP's: []
I1107 17:17:49.377655 273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt ...
I1107 17:17:49.377689 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.crt: {Name:mk85045205a0f3cc9db16d3ba4384eb58e4d4170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.377932 273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key ...
I1107 17:17:49.377950 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/client.key: {Name:mk22ddbbc0c35976a622861a2537590ceb2c3529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.378071 273963 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e
I1107 17:17:49.378101 273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1107 17:17:49.717401 273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e ...
I1107 17:17:49.717449 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e: {Name:mk1d0b418ed1d3c777ce02b789369b0a0920bca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.717668 273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e ...
I1107 17:17:49.717686 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e: {Name:mkad3745d4acb3a4df279ae7d626aaef591fc7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.717800 273963 certs.go:320] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt
I1107 17:17:49.717875 273963 certs.go:324] copying /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key
I1107 17:17:49.717938 273963 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key
I1107 17:17:49.717957 273963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt with IP's: []
I1107 17:17:49.788111 273963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt ...
I1107 17:17:49.788144 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt: {Name:mk4ef43b9fbc1a2c60e066e8c2245294f6e4a088 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.788346 273963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key ...
I1107 17:17:49.788363 273963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key: {Name:mk3536bb270258df328f9904013708493e9e5cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1107 17:17:49.788581 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem (1338 bytes)
W1107 17:17:49.788630 273963 certs.go:384] ignoring /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129_empty.pem, impossibly tiny 0 bytes
I1107 17:17:49.788648 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca-key.pem (1679 bytes)
I1107 17:17:49.788683 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/ca.pem (1082 bytes)
I1107 17:17:49.788717 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/cert.pem (1123 bytes)
I1107 17:17:49.788750 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/certs/home/jenkins/minikube-integration/15310-3679/.minikube/certs/key.pem (1675 bytes)
I1107 17:17:49.788805 273963 certs.go:388] found cert: /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem (1708 bytes)
I1107 17:17:49.789402 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1107 17:17:49.809402 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1107 17:17:49.828363 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1107 17:17:49.851556 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/profiles/cilium-171301/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1107 17:17:49.875238 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1107 17:17:49.895507 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1107 17:17:49.917493 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1107 17:17:49.938898 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1107 17:17:49.958074 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1107 17:17:49.976967 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/certs/10129.pem --> /usr/share/ca-certificates/10129.pem (1338 bytes)
I1107 17:17:49.997249 273963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15310-3679/.minikube/files/etc/ssl/certs/101292.pem --> /usr/share/ca-certificates/101292.pem (1708 bytes)
I1107 17:17:50.022620 273963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1107 17:17:50.037986 273963 ssh_runner.go:195] Run: openssl version
I1107 17:17:50.043912 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10129.pem && ln -fs /usr/share/ca-certificates/10129.pem /etc/ssl/certs/10129.pem"
I1107 17:17:50.052548 273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10129.pem
I1107 17:17:50.056053 273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov 7 16:50 /usr/share/ca-certificates/10129.pem
I1107 17:17:50.056137 273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10129.pem
I1107 17:17:50.061307 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10129.pem /etc/ssl/certs/51391683.0"
I1107 17:17:50.069615 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/101292.pem && ln -fs /usr/share/ca-certificates/101292.pem /etc/ssl/certs/101292.pem"
I1107 17:17:50.079805 273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/101292.pem
I1107 17:17:50.084296 273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov 7 16:50 /usr/share/ca-certificates/101292.pem
I1107 17:17:50.084356 273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/101292.pem
I1107 17:17:50.090328 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/101292.pem /etc/ssl/certs/3ec20f2e.0"
I1107 17:17:50.099164 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1107 17:17:50.110113 273963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:50.114343 273963 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov 7 16:46 /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:50.114408 273963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1107 17:17:50.120637 273963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1107 17:17:50.130809 273963 kubeadm.go:396] StartCluster: {Name:cilium-171301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:cilium-171301 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:cilium NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
I1107 17:17:50.130955 273963 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1107 17:17:50.158917 273963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1107 17:17:50.166269 273963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1107 17:17:50.174871 273963 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I1107 17:17:50.174936 273963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1107 17:17:50.184105 273963 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1107 17:17:50.184164 273963 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1107 17:17:50.239005 273963 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
I1107 17:17:50.239098 273963 kubeadm.go:317] [preflight] Running pre-flight checks
I1107 17:17:50.279571 273963 kubeadm.go:317] [preflight] The system verification failed. Printing the output from the verification:
I1107 17:17:50.279660 273963 kubeadm.go:317] KERNEL_VERSION: 5.15.0-1021-gcp
I1107 17:17:50.279716 273963 kubeadm.go:317] OS: Linux
I1107 17:17:50.279780 273963 kubeadm.go:317] CGROUPS_CPU: enabled
I1107 17:17:50.279825 273963 kubeadm.go:317] CGROUPS_CPUACCT: enabled
I1107 17:17:50.279866 273963 kubeadm.go:317] CGROUPS_CPUSET: enabled
I1107 17:17:50.279907 273963 kubeadm.go:317] CGROUPS_DEVICES: enabled
I1107 17:17:50.279948 273963 kubeadm.go:317] CGROUPS_FREEZER: enabled
I1107 17:17:50.279989 273963 kubeadm.go:317] CGROUPS_MEMORY: enabled
I1107 17:17:50.280029 273963 kubeadm.go:317] CGROUPS_PIDS: enabled
I1107 17:17:50.280070 273963 kubeadm.go:317] CGROUPS_HUGETLB: enabled
I1107 17:17:50.280109 273963 kubeadm.go:317] CGROUPS_BLKIO: enabled
I1107 17:17:50.359738 273963 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
I1107 17:17:50.359870 273963 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1107 17:17:50.359983 273963 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1107 17:17:50.504499 273963 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1107 17:17:49.760036 265599 addons.go:488] enableAddons completed in 1.028452371s
I1107 17:17:49.988064 265599 pod_ready.go:92] pod "kube-apiserver-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:49.988085 265599 pod_ready.go:81] duration metric: took 399.27917ms waiting for pod "kube-apiserver-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:49.988096 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.387943 265599 pod_ready.go:92] pod "kube-controller-manager-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:50.387964 265599 pod_ready.go:81] duration metric: took 399.861996ms waiting for pod "kube-controller-manager-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.387975 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.787240 265599 pod_ready.go:92] pod "kube-proxy-627q2" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:50.787266 265599 pod_ready.go:81] duration metric: took 399.283504ms waiting for pod "kube-proxy-627q2" in "kube-system" namespace to be "Ready" ...
I1107 17:17:50.787279 265599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:51.187853 265599 pod_ready.go:92] pod "kube-scheduler-pause-171530" in "kube-system" namespace has status "Ready":"True"
I1107 17:17:51.187885 265599 pod_ready.go:81] duration metric: took 400.597643ms waiting for pod "kube-scheduler-pause-171530" in "kube-system" namespace to be "Ready" ...
I1107 17:17:51.187896 265599 pod_ready.go:38] duration metric: took 2.432339677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1107 17:17:51.187921 265599 api_server.go:51] waiting for apiserver process to appear ...
I1107 17:17:51.187970 265599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1107 17:17:51.198604 265599 api_server.go:71] duration metric: took 2.467050632s to wait for apiserver process to appear ...
I1107 17:17:51.198640 265599 api_server.go:87] waiting for apiserver healthz status ...
I1107 17:17:51.198650 265599 api_server.go:252] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1107 17:17:51.203228 265599 api_server.go:278] https://192.168.85.2:8443/healthz returned 200:
ok
I1107 17:17:51.204215 265599 api_server.go:140] control plane version: v1.25.3
I1107 17:17:51.204244 265599 api_server.go:130] duration metric: took 5.597242ms to wait for apiserver health ...
I1107 17:17:51.204255 265599 system_pods.go:43] waiting for kube-system pods to appear ...
I1107 17:17:51.389884 265599 system_pods.go:59] 7 kube-system pods found
I1107 17:17:51.389918 265599 system_pods.go:61] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
I1107 17:17:51.389923 265599 system_pods.go:61] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
I1107 17:17:51.389927 265599 system_pods.go:61] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
I1107 17:17:51.389932 265599 system_pods.go:61] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
I1107 17:17:51.389936 265599 system_pods.go:61] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
I1107 17:17:51.389940 265599 system_pods.go:61] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:51.389944 265599 system_pods.go:61] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
I1107 17:17:51.389949 265599 system_pods.go:74] duration metric: took 185.688763ms to wait for pod list to return data ...
I1107 17:17:51.389958 265599 default_sa.go:34] waiting for default service account to be created ...
I1107 17:17:51.587856 265599 default_sa.go:45] found service account: "default"
I1107 17:17:51.587885 265599 default_sa.go:55] duration metric: took 197.921282ms for default service account to be created ...
I1107 17:17:51.587896 265599 system_pods.go:116] waiting for k8s-apps to be running ...
I1107 17:17:51.791610 265599 system_pods.go:86] 7 kube-system pods found
I1107 17:17:51.791656 265599 system_pods.go:89] "coredns-565d847f94-r6gbf" [4070c2b0-f450-4494-afc9-30615ea8f3c9] Running
I1107 17:17:51.791666 265599 system_pods.go:89] "etcd-pause-171530" [2bab4f9b-1fd7-41f7-80f1-288a81b1f976] Running
I1107 17:17:51.791683 265599 system_pods.go:89] "kube-apiserver-pause-171530" [89321c65-1551-48c9-bcf0-f5f0b55eeaac] Running
I1107 17:17:51.791692 265599 system_pods.go:89] "kube-controller-manager-pause-171530" [6859f749-9d58-414c-b661-8e6fd4187af6] Running
I1107 17:17:51.791699 265599 system_pods.go:89] "kube-proxy-627q2" [177a31d0-df11-4105-9f5a-c3effe2fc965] Running
I1107 17:17:51.791707 265599 system_pods.go:89] "kube-scheduler-pause-171530" [aa44eb0b-2e5b-45b8-af10-a91256f3021a] Running
I1107 17:17:51.791717 265599 system_pods.go:89] "storage-provisioner" [225d8eea-c00a-46a3-8b89-abb34458db76] Running
I1107 17:17:51.791725 265599 system_pods.go:126] duration metric: took 203.823982ms to wait for k8s-apps to be running ...
I1107 17:17:51.791734 265599 system_svc.go:44] waiting for kubelet service to be running ....
I1107 17:17:51.791785 265599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1107 17:17:51.802112 265599 system_svc.go:56] duration metric: took 10.369415ms WaitForService to wait for kubelet.
I1107 17:17:51.802147 265599 kubeadm.go:573] duration metric: took 3.070599627s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
I1107 17:17:51.802170 265599 node_conditions.go:102] verifying NodePressure condition ...
I1107 17:17:51.987329 265599 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
I1107 17:17:51.987365 265599 node_conditions.go:123] node cpu capacity is 8
I1107 17:17:51.987379 265599 node_conditions.go:105] duration metric: took 185.202183ms to run NodePressure ...
I1107 17:17:51.987392 265599 start.go:217] waiting for startup goroutines ...
I1107 17:17:51.987763 265599 ssh_runner.go:195] Run: rm -f paused
I1107 17:17:52.043023 265599 start.go:506] kubectl: 1.25.3, cluster: 1.25.3 (minor skew: 0)
I1107 17:17:52.045707 265599 out.go:177] * Done! kubectl is now configured to use "pause-171530" cluster and "default" namespace by default
I1107 17:17:50.507106 273963 out.go:204] - Generating certificates and keys ...
I1107 17:17:50.507263 273963 kubeadm.go:317] [certs] Using existing ca certificate authority
I1107 17:17:50.507377 273963 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
I1107 17:17:50.666684 273963 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
I1107 17:17:50.780542 273963 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
I1107 17:17:50.844552 273963 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
I1107 17:17:50.965350 273963 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
I1107 17:17:51.084839 273963 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
I1107 17:17:51.084994 273963 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [cilium-171301 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
I1107 17:17:51.308472 273963 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
I1107 17:17:51.308615 273963 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [cilium-171301 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
I1107 17:17:51.778235 273963 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
I1107 17:17:52.391061 273963 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
I1107 17:17:52.518001 273963 kubeadm.go:317] [certs] Generating "sa" key and public key
I1107 17:17:52.518138 273963 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1107 17:17:52.701867 273963 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
I1107 17:17:52.811971 273963 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1107 17:17:53.225312 273963 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1107 17:17:53.274661 273963 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1107 17:17:53.287337 273963 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1107 17:17:53.288459 273963 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1107 17:17:53.288545 273963 kubeadm.go:317] [kubelet-start] Starting the kubelet
I1107 17:17:53.394876 273963 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1107 17:17:49.280257 233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1107 17:17:49.280620 233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I1107 17:17:49.439027 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:49.464725 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:49.464794 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:49.487632 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:49.487702 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:49.515626 233006 logs.go:274] 0 containers: []
W1107 17:17:49.515655 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:49.515712 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:49.544438 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:49.544516 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:49.573888 233006 logs.go:274] 0 containers: []
W1107 17:17:49.573916 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:49.573964 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:49.600751 233006 logs.go:274] 0 containers: []
W1107 17:17:49.600780 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:49.600853 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:49.629515 233006 logs.go:274] 0 containers: []
W1107 17:17:49.629547 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:49.629601 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:49.660954 233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
I1107 17:17:49.661005 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:49.661019 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:49.703297 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:49.703332 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:49.742169 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:49.742205 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:49.813899 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:49.813936 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:49.830714 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:49.830758 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:49.899172 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:49.899199 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:49.899211 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:49.976394 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:49.976437 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:50.052769 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:50.052802 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:50.086254 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:50.086283 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:50.119937 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:50.119972 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:50.156488 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:50.156536 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:50.186320 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:50.186346 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:52.707667 233006 api_server.go:252] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1107 17:17:52.708037 233006 api_server.go:268] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
I1107 17:17:52.938410 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I1107 17:17:52.965686 233006 logs.go:274] 2 containers: [80472286f6b2 0905efe1e29d]
I1107 17:17:52.965766 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I1107 17:17:52.992750 233006 logs.go:274] 1 containers: [6fec17665e36]
I1107 17:17:52.992825 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I1107 17:17:53.017709 233006 logs.go:274] 0 containers: []
W1107 17:17:53.017733 233006 logs.go:276] No container was found matching "coredns"
I1107 17:17:53.017788 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I1107 17:17:53.045447 233006 logs.go:274] 2 containers: [f7eaa38ca161 ec5fef71a1fc]
I1107 17:17:53.045524 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I1107 17:17:53.071606 233006 logs.go:274] 0 containers: []
W1107 17:17:53.071635 233006 logs.go:276] No container was found matching "kube-proxy"
I1107 17:17:53.071688 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I1107 17:17:53.095002 233006 logs.go:274] 0 containers: []
W1107 17:17:53.095032 233006 logs.go:276] No container was found matching "kubernetes-dashboard"
I1107 17:17:53.095090 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I1107 17:17:53.122895 233006 logs.go:274] 0 containers: []
W1107 17:17:53.122919 233006 logs.go:276] No container was found matching "storage-provisioner"
I1107 17:17:53.122971 233006 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I1107 17:17:53.148541 233006 logs.go:274] 2 containers: [8891a1b14e04 371287b3c0c6]
I1107 17:17:53.148583 233006 logs.go:123] Gathering logs for kube-scheduler [ec5fef71a1fc] ...
I1107 17:17:53.148594 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec5fef71a1fc"
I1107 17:17:53.181466 233006 logs.go:123] Gathering logs for Docker ...
I1107 17:17:53.181503 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I1107 17:17:53.203825 233006 logs.go:123] Gathering logs for describe nodes ...
I1107 17:17:53.203856 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W1107 17:17:53.269885 233006 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
** /stderr **
I1107 17:17:53.269910 233006 logs.go:123] Gathering logs for kube-apiserver [80472286f6b2] ...
I1107 17:17:53.269921 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80472286f6b2"
I1107 17:17:53.309836 233006 logs.go:123] Gathering logs for kube-apiserver [0905efe1e29d] ...
I1107 17:17:53.309876 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0905efe1e29d"
I1107 17:17:53.397994 233006 logs.go:123] Gathering logs for etcd [6fec17665e36] ...
I1107 17:17:53.398034 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6fec17665e36"
I1107 17:17:53.434553 233006 logs.go:123] Gathering logs for kube-scheduler [f7eaa38ca161] ...
I1107 17:17:53.434595 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7eaa38ca161"
I1107 17:17:53.515012 233006 logs.go:123] Gathering logs for kubelet ...
I1107 17:17:53.515049 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1107 17:17:53.590837 233006 logs.go:123] Gathering logs for dmesg ...
I1107 17:17:53.590881 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1107 17:17:53.608621 233006 logs.go:123] Gathering logs for kube-controller-manager [8891a1b14e04] ...
I1107 17:17:53.608659 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8891a1b14e04"
I1107 17:17:53.640909 233006 logs.go:123] Gathering logs for kube-controller-manager [371287b3c0c6] ...
I1107 17:17:53.640937 233006 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 371287b3c0c6"
I1107 17:17:53.684459 233006 logs.go:123] Gathering logs for container status ...
I1107 17:17:53.684503 233006 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1107 17:17:53.396707 273963 out.go:204] - Booting up control plane ...
I1107 17:17:53.396844 273963 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1107 17:17:53.398788 273963 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1107 17:17:53.400416 273963 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1107 17:17:53.402210 273963 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1107 17:17:53.404604 273963 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
*
* ==> Docker <==
* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:55 UTC. --
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.867503766Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 708aac62fe16d29b27b7e03823a98eca3e1f022eaaaae07b03b614462c34f61c], retrying...."
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.947874677Z" level=info msg="Removing stale sandbox 08bdce8089e979563c1c35fc2b9cb00ca97ae33cb7c45028d6147314b55324da (6d1abd3e30d792833852b3f43c7effc3075f17e2807dee93ee5437621536102e)"
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.949913639Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 9b7990a4868df38640a1a4b501d3861a71b30b34429e7e3c19b6f85cd55e5664 f88327d8868c8ad0f7411a8b72ba2baa71bca468214ef9b295ee84ffe8afcc29], retrying...."
Nov 07 17:17:23 pause-171530 dockerd[4467]: time="2022-11-07T17:17:23.982329624Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.027920216Z" level=info msg="Loading containers: done."
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040588525Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.040673581Z" level=info msg="Daemon has completed initialization"
Nov 07 17:17:24 pause-171530 systemd[1]: Started Docker Application Container Engine.
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.057458789Z" level=info msg="API listen on [::]:2376"
Nov 07 17:17:24 pause-171530 dockerd[4467]: time="2022-11-07T17:17:24.061113478Z" level=info msg="API listen on /var/run/docker.sock"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.523817703Z" level=info msg="ignoring event" container=7c093d736ba0305191d4e798ca0d308583b1c7463ad986b23c2d186951b7d0ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.529123877Z" level=info msg="ignoring event" container=42f2c39561b11166e1cca511011d19541e07606bda37d3d78a6b8d6324edba56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.531551274Z" level=info msg="ignoring event" container=c109021f97b0ec6487f090af18a20062a7df3c8845d39ce8fa8a5e3494da80ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536407618Z" level=info msg="ignoring event" container=bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.536463887Z" level=info msg="ignoring event" container=c9629a7195e0926d21d4aebeb78f3778a8379562c623cac143cfd8764639c395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.537822290Z" level=info msg="ignoring event" container=cdc8d9ab8c016ad1726c8ec69dafffa0822704571646314f8f002d64229b9dcd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.654013442Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.660623628Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.668658992Z" level=error msg="404d7bd895c853d22c917ec8770367d7a91dafd370c7b8959c3253e584e1eb5d cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.671711028Z" level=error msg="9dc3075461e2264f083ac8045d0398e1cb1b95857a3a65126bf2c8178945eb02 cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.683178370Z" level=error msg="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2 cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730178512Z" level=error msg="1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.730223095Z" level=error msg="Handler for POST /v1.40/containers/1ca6e9485fa8aaf7657cec34a2aafba49fda2fe8d446b8f44f511ca7746e1c0d/start returned error: can't join IPC of container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512: container bc4811d3f9f168bfaa9567e8b88953c21b5104eba6a02a534a6b32e074d9a512 is not running"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.734363189Z" level=error msg="ca313e60699e88a95aade29a7a771b01943787674653d827c9ac778c304b7ee2 cleanup: failed to delete container from containerd: no such container"
Nov 07 17:17:28 pause-171530 dockerd[4467]: time="2022-11-07T17:17:28.889125639Z" level=error msg="b6069c474d48724ad6405cac869a299021de19f0e83735250a6669e95f84de98 cleanup: failed to delete container from containerd: no such container"
*
* ==> container status <==
* CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
4e27fc3536146     6e38f40d628db   5 seconds ago    Running   storage-provisioner       0         869229924f7b0
2678c07441af4     5185b96f0becf   19 seconds ago   Running   coredns                   2         c8ae9930fd89e
d128588b435c4     beaaf00edd38a   20 seconds ago   Running   kube-proxy                3         308cd3b6261d9
fa1fae9e3dd4c     6d23ec0e8b87e   24 seconds ago   Running   kube-scheduler            3         499d52ff7ec2d
9a2c93b7807eb     0346dbd74bcb9   24 seconds ago   Running   kube-apiserver            3         ca7019d32208a
c617e5f72b7e0     6039992312758   24 seconds ago   Running   kube-controller-manager   3         b2be7ef781078
240c58d21dba8     a8a176a5d5d69   24 seconds ago   Running   etcd                      3         af4dddaaaab51
b6069c474d487     5185b96f0becf   27 seconds ago   Created   coredns                   1         cdc8d9ab8c016
9dc3075461e22     0346dbd74bcb9   27 seconds ago   Created   kube-apiserver            2         c109021f97b0e
404d7bd895c85     6039992312758   27 seconds ago   Created   kube-controller-manager   2         7c093d736ba03
ca313e60699e8     6d23ec0e8b87e   27 seconds ago   Created   kube-scheduler            2         42f2c39561b11
1ca6e9485fa8a     a8a176a5d5d69   27 seconds ago   Created   etcd                      2         bc4811d3f9f16
d4737d2c0cc12     beaaf00edd38a   27 seconds ago   Created   kube-proxy                2         c9629a7195e09
*
* ==> coredns [2678c07441af] <==
* .:53
[INFO] plugin/reload: Running configuration SHA512 = f3fde9de6486f59fe260f641c8b45d450960379ea9d73a7fef0c1feac6c746730bd77c72d2092518703e00d94c78d1eec0c6cb3efcd4dc489238241cea4bf436
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
*
* ==> coredns [b6069c474d48] <==
*
*
* ==> describe nodes <==
* Name: pause-171530
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=pause-171530
kubernetes.io/os=linux
minikube.k8s.io/commit=a8d0d2851e022d93d0c1376f6d2f8095068de262
minikube.k8s.io/name=pause-171530
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2022_11_07T17_16_00_0700
minikube.k8s.io/version=v1.28.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Nov 2022 17:15:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: pause-171530
AcquireTime: <unset>
RenewTime: Mon, 07 Nov 2022 17:17:55 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:15:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:15:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:15:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Nov 2022 17:17:35 +0000 Mon, 07 Nov 2022 17:17:35 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: pause-171530
Capacity:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 304681132Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32871748Ki
pods: 110
System Info:
Machine ID: 996614ec4c814b87b7ec8ebee3d0e8c9
System UUID: 584d8003-5974-4bad-ab15-c1a6d30346fa
Boot ID: 08dd20cb-78b6-4f23-8a31-d42df46571b3
Kernel Version: 5.15.0-1021-gcp
OS Image: Ubuntu 20.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.20
Kubelet Version: v1.25.3
Kube-Proxy Version: v1.25.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-565d847f94-r6gbf 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 103s
kube-system etcd-pause-171530 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 115s
kube-system kube-apiserver-pause-171530 250m (3%) 0 (0%) 0 (0%) 0 (0%) 115s
kube-system kube-controller-manager-pause-171530 200m (2%) 0 (0%) 0 (0%) 0 (0%) 115s
kube-system kube-proxy-627q2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 103s
kube-system kube-scheduler-pause-171530 100m (1%) 0 (0%) 0 (0%) 0 (0%) 115s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (9%) 0 (0%)
memory 170Mi (0%) 170Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 100s kube-proxy
Normal Starting 19s kube-proxy
Normal NodeHasSufficientPID 2m7s (x4 over 2m7s) kubelet Node pause-171530 status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 2m7s (x4 over 2m7s) kubelet Node pause-171530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientMemory 2m7s (x4 over 2m7s) kubelet Node pause-171530 status is now: NodeHasSufficientMemory
Normal Starting 115s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 115s kubelet Node pause-171530 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 115s kubelet Node pause-171530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 115s kubelet Node pause-171530 status is now: NodeHasSufficientPID
Normal NodeNotReady 115s kubelet Node pause-171530 status is now: NodeNotReady
Normal NodeAllocatableEnforced 115s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 105s kubelet Node pause-171530 status is now: NodeReady
Normal RegisteredNode 103s node-controller Node pause-171530 event: Registered Node pause-171530 in Controller
Normal Starting 25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 25s (x8 over 25s) kubelet Node pause-171530 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 25s (x8 over 25s) kubelet Node pause-171530 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 25s (x7 over 25s) kubelet Node pause-171530 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 25s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 7s node-controller Node pause-171530 event: Registered Node pause-171530 in Controller
*
* ==> dmesg <==
* [ +0.004797] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
[ +0.006797] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=0000000007b82556
[ +0.007369] FS-Cache: O-key=[8] '7fa00f0200000000'
[ +0.004936] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
[ +0.006594] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=000000001524e9eb
[ +0.008729] FS-Cache: N-key=[8] '7fa00f0200000000'
[ +0.488901] FS-Cache: Duplicate cookie detected
[ +0.004717] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
[ +0.006779] FS-Cache: O-cookie d=00000000b1e64776{9p.inode} n=000000004d15690e
[ +0.007381] FS-Cache: O-key=[8] '8ea00f0200000000'
[ +0.004952] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
[ +0.006607] FS-Cache: N-cookie d=00000000b1e64776{9p.inode} n=00000000470ffc24
[ +0.008833] FS-Cache: N-key=[8] '8ea00f0200000000'
[Nov 7 16:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Nov 7 17:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000007] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[ +1.008285] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000005] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[ +2.011837] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000035] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[Nov 7 17:06] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000011] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[ +8.191212] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-46185e74412a
[ +0.000044] ll header: 00000000: 02 42 46 c3 83 d6 02 42 c0 a8 3a 02 08 00
[Nov 7 17:14] process 'docker/tmp/qemu-check072764330/check' started with executable stack
*
* ==> etcd [1ca6e9485fa8] <==
*
*
* ==> etcd [240c58d21dba] <==
* {"level":"info","ts":"2022-11-07T17:17:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-07T17:17:31.626Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.85.2:2380"}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.85.2:2380"}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2022-11-07T17:17:31.628Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 3"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 3"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 3"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 4"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:pause-171530 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2022-11-07T17:17:33.042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.85.2:2379"}
{"level":"info","ts":"2022-11-07T17:17:33.044Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"warn","ts":"2022-11-07T17:17:43.326Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"152.41473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-565d847f94-r6gbf\" ","response":"range_response_count:1 size:5038"}
{"level":"info","ts":"2022-11-07T17:17:43.326Z","caller":"traceutil/trace.go:171","msg":"trace[1276518897] range","detail":"{range_begin:/registry/pods/kube-system/coredns-565d847f94-r6gbf; range_end:; response_count:1; response_revision:452; }","duration":"152.549915ms","start":"2022-11-07T17:17:43.174Z","end":"2022-11-07T17:17:43.326Z","steps":["trace[1276518897] 'agreement among raft nodes before linearized reading' (duration: 40.877163ms)","trace[1276518897] 'range keys from in-memory index tree' (duration: 111.462423ms)"],"step_count":2}
*
* ==> kernel <==
* 17:17:55 up 1:00, 0 users, load average: 3.46, 3.53, 2.55
Linux pause-171530 5.15.0-1021-gcp #28~20.04.1-Ubuntu SMP Mon Oct 17 11:37:54 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.5 LTS"
*
* ==> kube-apiserver [9a2c93b7807e] <==
* I1107 17:17:34.912107 1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1107 17:17:34.912439 1 controller.go:83] Starting OpenAPI AggregationController
I1107 17:17:34.912469 1 available_controller.go:491] Starting AvailableConditionController
I1107 17:17:34.912477 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1107 17:17:34.912134 1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1107 17:17:34.912451 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1107 17:17:34.920676 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1107 17:17:34.912711 1 controller.go:85] Starting OpenAPI controller
I1107 17:17:35.019428 1 shared_informer.go:262] Caches are synced for node_authorizer
I1107 17:17:35.019719 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1107 17:17:35.020233 1 shared_informer.go:262] Caches are synced for crd-autoregister
I1107 17:17:35.019789 1 cache.go:39] Caches are synced for autoregister controller
I1107 17:17:35.020532 1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1107 17:17:35.020562 1 apf_controller.go:305] Running API Priority and Fairness config worker
I1107 17:17:35.021059 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1107 17:17:35.038005 1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
I1107 17:17:35.688505 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1107 17:17:35.915960 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1107 17:17:36.540683 1 controller.go:616] quota admission added evaluator for: serviceaccounts
I1107 17:17:36.550675 1 controller.go:616] quota admission added evaluator for: deployments.apps
I1107 17:17:36.580888 1 controller.go:616] quota admission added evaluator for: daemonsets.apps
I1107 17:17:36.641282 1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1107 17:17:36.648284 1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1107 17:17:47.967954 1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1107 17:17:48.027205 1 controller.go:616] quota admission added evaluator for: endpoints
*
* ==> kube-apiserver [9dc3075461e2] <==
*
*
* ==> kube-controller-manager [404d7bd895c8] <==
*
*
* ==> kube-controller-manager [c617e5f72b7e] <==
* I1107 17:17:48.008590 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
I1107 17:17:48.008778 1 event.go:294] "Event occurred" object="pause-171530" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-171530 event: Registered Node pause-171530 in Controller"
I1107 17:17:48.008734 1 taint_manager.go:209] "Sending events to api server"
W1107 17:17:48.008888 1 node_lifecycle_controller.go:1058] Missing timestamp for Node pause-171530. Assuming now as a timestamp.
I1107 17:17:48.008920 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I1107 17:17:48.019368 1 shared_informer.go:262] Caches are synced for namespace
I1107 17:17:48.020195 1 shared_informer.go:262] Caches are synced for node
I1107 17:17:48.020224 1 range_allocator.go:166] Starting range CIDR allocator
I1107 17:17:48.020230 1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
I1107 17:17:48.020269 1 shared_informer.go:262] Caches are synced for cidrallocator
I1107 17:17:48.022046 1 shared_informer.go:262] Caches are synced for expand
I1107 17:17:48.023995 1 shared_informer.go:262] Caches are synced for attach detach
I1107 17:17:48.028885 1 shared_informer.go:262] Caches are synced for daemon sets
I1107 17:17:48.040695 1 shared_informer.go:262] Caches are synced for ReplicationController
I1107 17:17:48.059583 1 shared_informer.go:262] Caches are synced for disruption
I1107 17:17:48.093865 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
I1107 17:17:48.094015 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
I1107 17:17:48.094994 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
I1107 17:17:48.095030 1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1107 17:17:48.152712 1 shared_informer.go:262] Caches are synced for certificate-csrapproving
I1107 17:17:48.181359 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:17:48.224684 1 shared_informer.go:262] Caches are synced for resource quota
I1107 17:17:48.538831 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:17:48.624372 1 shared_informer.go:262] Caches are synced for garbage collector
I1107 17:17:48.624404 1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
*
* ==> kube-proxy [d128588b435c] <==
* I1107 17:17:35.802654 1 node.go:163] Successfully retrieved node IP: 192.168.85.2
I1107 17:17:35.802795 1 server_others.go:138] "Detected node IP" address="192.168.85.2"
I1107 17:17:35.802838 1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I1107 17:17:35.823572 1 server_others.go:206] "Using iptables Proxier"
I1107 17:17:35.823628 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1107 17:17:35.823641 1 server_others.go:214] "Creating dualStackProxier for iptables"
I1107 17:17:35.823661 1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1107 17:17:35.823700 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:17:35.823862 1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
I1107 17:17:35.824181 1 server.go:661] "Version info" version="v1.25.3"
I1107 17:17:35.824201 1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:17:35.824705 1 config.go:226] "Starting endpoint slice config controller"
I1107 17:17:35.824729 1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1107 17:17:35.824729 1 config.go:317] "Starting service config controller"
I1107 17:17:35.824742 1 shared_informer.go:255] Waiting for caches to sync for service config
I1107 17:17:35.824785 1 config.go:444] "Starting node config controller"
I1107 17:17:35.824797 1 shared_informer.go:255] Waiting for caches to sync for node config
I1107 17:17:35.925677 1 shared_informer.go:262] Caches are synced for node config
I1107 17:17:35.925674 1 shared_informer.go:262] Caches are synced for endpoint slice config
I1107 17:17:35.925738 1 shared_informer.go:262] Caches are synced for service config
*
* ==> kube-proxy [d4737d2c0cc1] <==
*
*
* ==> kube-scheduler [ca313e60699e] <==
*
*
* ==> kube-scheduler [fa1fae9e3dd4] <==
* I1107 17:17:32.057264 1 serving.go:348] Generated self-signed cert in-memory
W1107 17:17:34.927696 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1107 17:17:34.927730 1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1107 17:17:34.927742 1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
W1107 17:17:34.927752 1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1107 17:17:35.026876 1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
I1107 17:17:35.026910 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 17:17:35.028404 1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
I1107 17:17:35.032408 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1107 17:17:35.032445 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1107 17:17:35.049814 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1107 17:17:35.150068 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
* -- Logs begin at Mon 2022-11-07 17:15:39 UTC, end at Mon 2022-11-07 17:17:56 UTC. --
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.471796 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.572533 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.673100 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.773944 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:34 pause-171530 kubelet[5996]: E1107 17:17:34.874639 5996 kubelet.go:2448] "Error getting node" err="node \"pause-171530\" not found"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.019502 5996 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.020405 5996 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.026112 5996 apiserver.go:52] "Watching apiserver"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.028911 5996 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.029237 5996 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034807 5996 kubelet_node_status.go:108] "Node was previously registered" node="pause-171530"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.034917 5996 kubelet_node_status.go:73] "Successfully registered node" node="pause-171530"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044165 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-xtables-lock\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044237 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxrf\" (UniqueName: \"kubernetes.io/projected/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-api-access-xlxrf\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044387 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kpcd\" (UniqueName: \"kubernetes.io/projected/4070c2b0-f450-4494-afc9-30615ea8f3c9-kube-api-access-2kpcd\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044450 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/177a31d0-df11-4105-9f5a-c3effe2fc965-lib-modules\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044482 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4070c2b0-f450-4494-afc9-30615ea8f3c9-config-volume\") pod \"coredns-565d847f94-r6gbf\" (UID: \"4070c2b0-f450-4494-afc9-30615ea8f3c9\") " pod="kube-system/coredns-565d847f94-r6gbf"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044514 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/177a31d0-df11-4105-9f5a-c3effe2fc965-kube-proxy\") pod \"kube-proxy-627q2\" (UID: \"177a31d0-df11-4105-9f5a-c3effe2fc965\") " pod="kube-system/kube-proxy-627q2"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.044543 5996 reconciler.go:169] "Reconciler: start to sync state"
Nov 07 17:17:35 pause-171530 kubelet[5996]: I1107 17:17:35.630307 5996 scope.go:115] "RemoveContainer" containerID="d4737d2c0cc12722054c6a67e64adfcb09ac5d35405d5f62738a911f119801f2"
Nov 07 17:17:37 pause-171530 kubelet[5996]: I1107 17:17:37.800520 5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Nov 07 17:17:44 pause-171530 kubelet[5996]: I1107 17:17:44.973868 5996 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.752701 5996 topology_manager.go:205] "Topology Admit Handler"
Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934212 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pv8z\" (UniqueName: \"kubernetes.io/projected/225d8eea-c00a-46a3-8b89-abb34458db76-kube-api-access-4pv8z\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
Nov 07 17:17:49 pause-171530 kubelet[5996]: I1107 17:17:49.934319 5996 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/225d8eea-c00a-46a3-8b89-abb34458db76-tmp\") pod \"storage-provisioner\" (UID: \"225d8eea-c00a-46a3-8b89-abb34458db76\") " pod="kube-system/storage-provisioner"
*
* ==> storage-provisioner [4e27fc353614] <==
* I1107 17:17:50.349388 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1107 17:17:50.361550 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1107 17:17:50.361616 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1107 17:17:50.369430 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1107 17:17:50.369585 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"892faada-f17d-4afd-8626-0abe858770d6", APIVersion:"v1", ResourceVersion:"502", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb became leader
I1107 17:17:50.369661 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
I1107 17:17:50.470629 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-171530_dd9a8022-cc81-49c9-8ac6-84a620a09cdb!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-171530 -n pause-171530
helpers_test.go:261: (dbg) Run: kubectl --context pause-171530 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods:
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run: kubectl --context pause-171530 describe pod
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-171530 describe pod : exit status 1 (55.686107ms)
** stderr **
error: resource name may not be empty
** /stderr **
helpers_test.go:277: kubectl --context pause-171530 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (51.24s)